B) We will also have the card game Predictably Irrational. Feel free to bring your favorite games or distractions.
C) We usually walk and talk for about an hour after the meeting starts. There are two easy-access mini-malls nearby with hot takeout food available. Search for Gelson's or Pavilions in the zipcode 92660.
D) Share a surprise! Tell the group about something that happened that was unexpected or changed how you look at the universe.
E) Make a prediction and give a probability and end condition.
F) Contribute ideas to the group's future direction: topics, types of meetings, activities, etc.
Conversation Starter Readings:
These readings are optional, but if you do them, think about what you find interesting, surprising, useful, questionable, vexing, or exciting.
1)Georgism... In Space! Just and proper political economy for an interplanetary civilization
Georgism... In Space! - by Sam Harsimony
How did Georgism define economic land, and why is it important for tax policy in space?
Can you explain how Georgism can be applied to space resources like energy, matter, and physical space?
How might governments in space subsidize the collection of solar energy?
What are some difficulties with taxing matter in space, and how might taxes need to be adjusted to avoid distortions?
How can physical space in space be taxed, and what factors might be considered in assessing the value of land in space?
How can excludable resources like broadband spectrum and orbits be properly managed in space?
Why is it important to strike a careful balance when implementing taxation policies for space colonization?
I've seen talk about ending wildlife suffering, a goal which seems impractical, but while we're at ot. why not look at optimizing for pleasure for wild animals?
While ending wildlife suffering sounds like Fourier wanting to turn the seas into lemonade, that was a heartwarming video. I wonder what other approaches could there be to optimize pleasure for wild animals.
I'm extremely worried about AI risk lately, especially after the twin shocks of ChatGPT and Sydney. I want to do something to help. I live near DC and know people with political connections who could potentially help me get a policy meeting with a member of Congress. Two questions:
1. If I can get such a meeting, what specific policies should I propose? My own ideal is to ban all AI research, but I know that's an impossible ask.
2. Any insights on members of Congress who are particularly good targets to try and meet with specifically? I'm talking to my connections about this too, but I'm wondering if anyone here has knowledge specific to the AI risk field, in terms of which members may be receptive or have been receptive in the past, and have the power/motivation to try to do something about it.
1. Policy is a bit of an open question. This section of the online book Better Without AI (https://betterwithout.ai/pragmatic-AI-safety) may go into that, I haven't read that far yet. He has argued that AI mainly runs on hype, so if public perception turns negative, the whole thing would shut down. So policy may not be as effective as advertising.
Fucking hell, I love that G. E. Moore was dumb enough to argue that holding up his hands in front of him and saying 'Here is one hand, here is another' was enough to prove the existence of an external reality, Jesus, I am going to murder every philosophical position I disagree with with that one.
I later saw that there's some more to it, but the first link I saw said it was simply what I said. Actually, now that I saw the logical formulation, it really isn't better at all.
The trouble with appealing to common sense is that that is the death of philosophy. In particular, you really cannot appeal to common sense while trying to convince me I don't exist, because come on dude.
> In particular, you really cannot appeal to common sense while trying to convince me I don't exist
Per my other reply to you, I'm not trying to convince you that you don't exist, but merely that you don't have the existence you thought you did. This should not be surprising at all. Most cells in your body are replaced every 10 years. Are you the same person you were 10 years ago? What are the specific properties you ascribe to "existence" and "you"? This is not an obvious question, so why do you think you already have all of the answers?
As for Moore's proof, it's a simple argument in the end. Every argument of skepticism about the external world depends on concepts, evidence and even logic that we derived from the assumption that the external world exists, so if you then turn that around to question the existence of an external world, then you undermine all of the concepts, evidence and logic that form the core of your argument, so any such argument is self-defeating and necessarily less plausible than the external world just existing.
The existence I thought I did? I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Am I the same person from 10 years ago? No, but it is the same sentience. The witness has always been there.
About skepticism, it's at least partially derived from observation, but observing the world does not at all mean that the world is what it seems. I do think concepts, evidence and logic are all flawed and are just limiting viewpoints, useful to apprehend certain aspects of the truth but unable to contain all of it.
Likewise to eliminativism, first you had to read some philosophy to reach that conclusion, because you did not always believe you lacked consciousness. Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
> I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Nobody is claiming that you don't have some existence.
> Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
Nah, the information processes that constitute "me" threw out the fiction of consciousness after processing the information and being convinced by eliminativism and science as a whole.
It's not the same Moorean argument because what I'm throwing out is the qualia/phenomenal experience but not the perception, and the latter is all that's really needed.
I'll start: One of my personal favourites is Norm Macdonald's gag about Bill Cosby on Comedians in Cars Getting Coffee. Everything about that joke is perfect.
Truly awesome. A friend of mine used to play poker with Norm - he said he was so funny no one noticed he was winning. Perhaps they didn't care. Poker Patreon?
F-ing brilliant. Note how Jerry is all over the provenance -- he knows it's a gem. I suspect it's Ricky's joke, but by saying he doesn't remember where he heard it, Jerry can't ask for it... Comedians are fiercely competitive.
Considering how many jokes I've heard you'd think I'd be able to remember them better. Also depends what counts as jokes; the funniest stuff tends to be improv or machine translation games.
Not the best, but one that always stuck with me was from Mitch Hedberg. "I went to the store to buy candleholders, but they were out of candleholders. So I bought a cake."
I was reading through an old bookmark and saw that Scott was still looking to review Nixonland eventually. I read the book myself after that banger book review, but found that it's actually part of a quartet:
I've not read Before the Storm or Reaganland yet, but Nixonland flows near seamlessly into Invisible Bridge and I imagine the other two books segway cleanly too. I'd napkin that the whole thing is close to 150 hours of audiobook.
Even though Perlstein is quite partisan (he wrote an '05 book on how the DNC could take center stage again), I'm not sure anyone else had even attempted to write a consolidated history like this of American society/politics/discourse/mood for such periods of time.
I'd grown up around adults who made mention of things like Woodstock and Watergate or the '68 DNC riot, and even someone who still couldn't believe we'd elected an actor (Reagan) as president. It was all ancient history, with the current era being written on TV as and after the towers fell. Reading another generation's political experience connected *a lot* of dots in my zoomer mind. I'd defo push these books at any zoomer interested in politics frfr (or anyone seeking to understand the boomers).
I know there has been back and forth on this and it is not the wonder drug it was thought to be for a while but I've been taking a single aspirin before going out to shovel a large amount of snow for a couple years as a sort of voodoo ritual to protect myself.
I'm 70 now but have no diagnosed cardio issues, and have good cholesterol numbers, good blood sugar, healthy weight, good blood pressure and an echo cardiogram shows no buildup of bad stuff in my heart. In the summer I still ride decent mileage on my bike, though my 'century' rides are all metric now. I also sit and crank my old Schwinn on a turbo trainer in the winter until I think I'm risking dying of boredom.
But... when my dad was my age he died from his third heart attack so I figure taking an aspirin before I go out and perform what used to be a pretty reliable widow maker chore, hell, it can't hurt.
Does this make sense to anyone who actually knows about this stuff?
What you want to be careful about is the fact that aspirin has a significant anti-clotting function, and one thing you *don't* want to have happen is a brain bleed that doesn't get sealed off right away, i.e. a hemorrhagic stroke, either a big one, or even the very small kind that you don't notice but which degrades your brain over time.
I don't have any useful advice about this, this is something you want to discuss carefully with a physician, I'm just observing the tradeoffs. People dismiss the anticlotting issues with aspirin because they think "oh a GI bleed is less scary than an MI" but they forget about the stroke possibility.
The Biden Administration is obviously going to lose in the Supreme Court regarding the student-loan cancellation. And on the merits they should lose, though this Court would rule against them regardless.
Both the president and his staff know that's what's going to happen, and privately don't actually mind it. The whole thing is kabuki theater.
I have been getting a lot of usage out of the saying "if you're not going to do something, you might as well not do something big" recently. Everybody in politics seems to be following that mantra these days.
Having done a bit more reading on the case I would adjust my prediction somewhat. The standing of the two set of plaintiffs, one in particular, appears a good deal weaker than I was aware of. I'd still wager in favor of the Court ruling against the administration. But there does seem to be some fair chance of 5 justices agreeing that the plaintiffs' standing to sue over this topic just doesn't pass a smell test, leading to a procedural ruling rather than a merits one.
I only read about half of that article, but I think that may be the wrong take. People with bad mechanics rely on conscious thought, possibly in the same way that chess masters instinctively know the good move while amateurs have think quite hard. But chess masters go through a long stage of thinking hard about every move before reaching that level, and "don't think too hard" would be terrible advice for someone looking to improve.
Similarly, focusing on your muscular form etc is often an important part of building good movement habits. The end goal should be to make it unconscious but that's not necessarily the way to get there.
As I understand it, the problem was that people were being told to pay attention to their knees during rehab. Perhaps the issue is that they weren't gotten to the level of unconscious competence.
"“We know that early-life stress impacts the brain, but until now, we didn’t know how,” Baram said. “Our team focused on identifying potentially stress-sensitive brain pathways. We discovered a new pathway within the reward circuit that expresses a molecule called corticotropin-releasing hormone that controls our responses to stress. We found that adverse experiences cause this brain pathway to be overactive.”
“These changes to the pathway disrupt reward behaviors, reducing pleasure and motivation for fun, food and sex cues in mice,” she said. “In humans, such behavioral changes, called ‘anhedonia,’ are associated with emotional disorders. Importantly, we discovered that when we silence this pathway using modern technology, we restore the brain’s normal reward behaviors.”
So there's this cluster of positions called "Illusionism" which is about doubting the existence of consciousness to various degrees, whatever that means exactly. I'm very interested in understanding how people think about this in more detail, so if anyone here is sympathetic to that set of ideas, I'd like to hear from you! Like, what is consciousness, what exactly does your position say about it, and why do you think that? (And if it's applicable, what do you see as the objections to that position, and why are they unconvincing?)
If it's relevant, I'm mostly through Dennett's "consciousness explained", and I think I understand his model quite well.
Eliminativism is specifically about the ineffability of mental states, the "what it is like". The illusion of consciousness might be like the "illusion" of solidity. We know solid objects are mostly empty space, but this doesn't eliminate the category of "solid", it just reframes what "solid" means. Somewhat analogously, we know that ineffable qualia are incompatible with a scientific picture of reality, and rejecting the naive first-hand "ineffability" as deceptive will permit us to reframe and properly understand what's going on with "consciousness".
I think this "ineffability" is the same sort of woo that so troubled philosophers a century ago when they were trying to explain how inanimate matter could lead to life, and so they invented vitalism. Vitalism is nowhere to be seen now because progressive improvements in our understanding of the mechanisms of life shrunk the explanatory gap to the point where it seemed implausible that there would be anything left to explain once that process neared completion. I think the same pattern will repeat with consciousness.
I think the objections to this are well-known: p-zombies, Mary's room, etc. P-zombies just aren't conceivable. A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. We lose nothing meaningful by just accepting we are those automatons. I don't find this particularly objectionable because I'm also a Compatibilist about free will.
For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics. Anytime you bring in some kind of infinity you start getting unintuitive results, and humans are notoriously bad at intuiting correct answers in such contexts. I think this is just another example.
Even if this were overcome in some revised Mary's room, I think there are a lot of reasons why Mary could still be surprised upon seeing red the first time (like the ability response), which are compatible with physicalism. There just isn't anything convincing there.
"For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics."
I think that's a misleading analogy., The axioms of Peano arithemtic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
> The axioms of Peano arithemtic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
Indeed, and yet almost every mathematician in the world was shocked when Godel effectively proved that Peano arithmetic was necessarily incomplete, which proves my point that humans are generally not great at intuiting results when infinities are involved, even when they're experts.
Nor has anyone shown that Mary's room is self consistent and so the infinity doesn't matter, so we're once again at the place where an intuition pump purporting to prove the existence of qualia does nothing of the kind.
"A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. "
A zombie world would only look the same objectively. It would feel different subjectively to everyone in it, ie. it would feel like nothing. Ignoring the subjective evidence is amounts to ignoring consciousness, and is itself question begging....The argument is then: there is no consciousness, so we are all zombies alreday, so there is no consciousness.
No the zombie wouldn’t be conscious. If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
There’s no science here - I think you are more “philosopher” than scientist anyway - basically all you are saying, in an verbal excrescence is that we can’t prove qualia therefore they don’t exist.
> If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
That's the whole definition of a p-zombie. They are physically indistinguishable from so-called "conscious beings", talk, walk and speak exactly the same things as conscious beings, but instead of being conscious and talking about consciousness, they're just completely wrong about their own mental states while asserting they have consciousness.
> Ignoring the subjective evidence is amounts to ignoring consciousness, and is itself question begging.
Only if you count subjective perceptions as reliable evidence. They are demonstrably not.
> The argument is then: there is no consciousness, so we are all zombies alreday, so there is no consciousness.
Not really, it's more like: why posit more entities than are necessary to explain the data? It's more parsimonious if p-zombie world were just our world and we're just mistaken about consciousness, and perceptions are commonly mistaken, so I should epistemically prefer "no p-zombies".
Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
After all, you can't actually prove that consciousness exists without simply referencing your perception of internal conscious experience. All of the logical arguments purporting to demonstrate its existence are fatally flawed. Unless you can prove it, there is no reason to posit its existence. That said, I agree a mechanistic theory should explain why we *believe* we have consciousness, and I think Graziano's paper which I referenced above is a great start on that.
So I've been reading this back and forth (thanks for having it!) and I feel like I understand the models of both sides and why you seem to be talking past each other, but mb I'm completely wrong! But here's an attempt to explain them. If anyone wants to give feedback on this, I'd be very interested.
I think the implicit disagreement that basically generates the arguments on both sides is what you take as the epistemic starting point. (I'm gonna represent both sides by Alice and Bob here so that if I mispresent someone it's less bad, also one side was argued by multiple people.) So Alice would argue that the way you reason about the world -- not about consciousness specifically, but about everything -- is that you have patterns of qualia and then react to them. That's how you navigate the world; you see qualia of something, like the image of a car in your visual field, and you react to it somehow. So the epistemic chain is something like [qualia] -> [interpretations of qualia based on experience/knowledge/intelligence/whatever] -> [conclusions about the world]. This qualia could be a non-material thing that acts on the body, or another aspect of the material stuff in the body; the distinction actually doesn't matter here.
So consequently, the qualia takes epistemic primacy. That's your *starting point*. Which means any theory has to explain qualia first and foremost. And it's a-priori impossible for science to rule out the existence of qualia because everything science does is *itself* based on qualia. If you're working in a lab, you don't start from the measurement itself; your starting point when you look at the display of an instrument is the *qualia* of the result that appears in your visual field, and then you conclude stuff based on that.
In particular, the argument from the paper,
> (1) for everything we know, there must be information in the brain
> (2) the brain's models are always fallible
> (3) therefore, our "knowledge" that we have qualia is fallible
completely misses the point, as does the illusionist framing in general. The reason is that all of these arguments only show that you can be misled about *what an input means*; that's what an illusion is. None of them show that the *input itself* is non-existent, which is the only thing that matters! Alice isn't claiming that her qualia are evidence for any particular thing in the external world -- such a claim could indeed be wrong! -- she's only talking about the qualia itself, and the argument above doesn't show how that could be nonexistent, and neither does any other argument made here. Again, it doesn't even make sense to her because *everything* you find out about the external world is itself based on qualia; it all starts at the same point.
Meanwhile, Bob identifies himself not with qualia but with his entire body as an information processing system. The qualia thing isn't the epistemic starting point; it's an *output* of the information processing system that is Bob (or anyone else). So conversely, the argument "you need to explain why there seems to be experience" misses the point because, well, the "seeming of experience" thingy is also just an output of the information processing system that is you. So you do have to explain why *the system produces this output*; you have to explain why Alice talks about 'kon-shush-nuhs' but you *don't* have to explain the experience thingy itself, because that's just something the information processing system talks about; it doesn't have to be true.
(This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status. The part that's confusing to Alice here is that you even model *yourself* from this third-person perspective, sort of.)
So as long as Alice waves her hands and stubbornly repeats that *no you really have to explain this experience thing, it's really there*, all that misses the point for Bob because it all assumes that the qualia is the epistemic starting point, which it isn't; again it's just an output. The only evidence that *would* count is, basically, anything that's valid from a third person perspective. So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence. Or if we argue about the a priori *complexity* of a universe with qualia in it, that could be relevant for the prior we can assign to both hypotheses. Or if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive. But appeals to the epistemic primacy of qualia aren't.
Does this sound right? I feel like if it is, then neither side has really provided evidence that's compelling the other side -- understandably so!
I don't think the qualiaphilic side need to lean very far towards the primacy of qualia, so long as consciousness is not ignored. In a way, what is epistemically primary is some notion of experience or perception including , but not limited to, qualia.
"if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive"
But it's still not invalid to say that qualia exist without having novel causal properties , so long as the are identical to something else...if qualia are not.an entirely additional ontological posti, they they do not have to justify their existence with novel causal powers.
Yeah, I took qualia as a standin for "subjective experience of any kind"
> But it's still not invalid to say that qualia exist without having novel causal properties , so long as the are identical to something else...if qualia are not.an entirely additional ontological posti, they they do not have to justify their existence with novel causal powers.
Not novel causal powers, but perhaps causal powers period -- even if they're also explainable in material terms?
I think you have the general contours of the situation. Some people take the primacy of qualia as a given, that it's a new kind of knowledge that simply cannot be be questioned because it can be directly experienced.
This seems to inherently beg the question to me. Scientific study always starts with "working definitions" that serve as a good starting point but require refinement as we develop a more complete, coherent picture of what's going on. We started with the premise that qualia exist because we had no reason to question their existence.
So we did our science and ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*. This state of affairs *requires revision to our assumptions and basic definitions*, as it would in any other scientific study, and we can devise alternative explanations, like eliminative materialism, that resolve this problem. But because Alice takes qualia as axiomatically true this is not a solution but an absurd kind of heresy, and rather than question the primacy of qualia, she would prefer to question various assumptions like reductionism or materialism.
There is no logical argument or evidence definitively demonstrating that qualia must exist or must have primacy, and the only argument from my side is epistemic parsimony and a recognition that nothing has been as effective as science at explaining the natural world of which we are a part.
Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
> So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence.
Yes, and we do have evidence that qualia are not integral but merely components of an information processing system. Phenomena like blindsight show that humans reliably report objects in their blind spot even when they don't consciously experience it. This is clear evidence that conscious experience and qualia are simply not what we perceive them to be, that they are merely a component of a system.
> Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
I think in the Alice model, it is possible to do science without qualia, but the evidence that you get from science -- and even the evidence that you are doing science at all -- is again qualia.
Anyway, I feel like this does point to a possible way that the problem could be resolved in principle. Like, Alice and Bob could agree that they can't agree on the epistemic starting point, so they could take the scientific validity of qualia as a crux. It'd be up to Alice to explain (a) how qualia works at all, (b) how a universe with qualia is philosophically simple, (c) how various phenomena like blind sight or the moving color thing from Dennett are compatible with a theory of qualia, and (d) how qualia makes functional predictions about the brain. If she could do all that, it ought to convince Bob that qualia exists after all.
>ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*.
You haven't shown that.
There's an argument against dualist theories of qualia, based on physical closure, where Qualia would have nothing to.do, and are therefore an unnecessary posit.
There's an argument
against identity theory based on irreducibility. you haven't even mentioned it..
In line with your tendency to ignore identity theory.
So that's two arguments against two theories of qualia. They don't add up to an argument against qualia unless they are exhaustive.
"This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status"
But "qualia" as such doesn't appear in naive phenomenological reports because it is a philosophical term of art. The naive theory is that colours, etc, are properties of external objects that are perceived exactly as they are. Naive realism, as its called, is unsustainable scientifically because science requires a distinction between primary and secondary qualities. In addition, there are specific phenomena, such as blindsight and synaesthesia , where qualia are missing or unusual. Qualia aren't uniformly rejected by scientists. for all that some philosophers insist they are unscientific.
Objective, scientific data aren't a naive starting point either. Scientific objectivity is has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
> Objective, scientific data aren't a naive starting point either. Scientific objectivity is has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
This begs the question. Focusing on the quantifiable and objective is not an implicit assertion that the subjective and unquantifiable exists, it is an epistemic stratagem to focus on that which *can* be quantified *at this time*, and progressively build understanding to a point where the previously unquantifiable can then be quantified.
The opposite is true. Given that everybody has an internal conscious experience then that has to be explained. Any science that doesn’t explain it isn’t a science. In fact it’s just hand waving because we don’t understand the brain.
> If subjective perceptions are not reliably evidence, aren't you knocking out all of science? There is no science if we can't observe reality.
I've responded to this point elsewhere in this thread as well, which I'll reproduce here:
> Science also does not require fully reliable perceptions or senses because it can quantify the unreliability via repeatability, and 1) restrict itself to the narrow domains in which perceptions are reliable, and 2) project measurements from unreliable or undetectable domains into the reliable domain. That's what instruments are for.
But how are you discerning which perceptions are reliable? And even if perceptions are unreliable, there is still the fact that we perceive. Reality could all be an illusion, but the illusion is being presented to someone.
> Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
sorry to jump in, I'm just curious, does this sentence imply that all [theories of consciousness under which consciousness is real] necessarily contradict science? Like, there's no way to have a theory that posits the existence of consciousness but is consistent with the laws of physics (and hence science/effective explanations/etc.)?
This would sort of mean that a second, implicit reason why you like the Dennett approach is by process of elimination; all the alternatives are bad.
Strictly speaking, no. For instance, panpsychism would not require changing any natural laws to explains things we've seen, it might simply require us to accept that every posited entity carries with it some spec of consciousness, and that natural laws will aggregate consciousness in various unobservable (subjective) ways. Human brains are then an aggregation of consciousness that can finally reflect on and understand consciousness itself.
If you consider first-person consciousness to be irreducible to physical facts, that's probably an elegant way recover science with a somewhat unverifiable component. Seems more plausible to me that we're just mistaken about our own mental states.
Re: process of elimination, in a way, yes. I go into that below in my thread with The Ancient Greek. It's just epistemically more justifiable in so many ways.
But panpsychism is clearly ridiculous. Consciousness is linked to brains. I think I could discourage panpsychists from their beliefs by asking them would they prefer to be shot in the leg or the brain.
The argument only requires that you can introspection subjective states that you can't you can't fully describe. It doesn't require that subjective states are accurate representations of anything beyond that. In part icular, non physicality is not asserted purely on the basis of introspection.
The data include the subjective data, unless you are begging the question by ignoring that.
It would also be parsimonious if only consciousness existed, and matter were an illusion. Parsimony does not imply a unique ontology.
You can't prove matter exists without referencing your own experience.
> The argument only requires that you can introspection subjective states that you can't you can't fully describe.
"Can't fully describe" is just a god of the gaps argument.
> The data include the subjective data, unless you are begging the question by ignoring that.
I'm not ignoring subjective data, I'm saying we have ample reasons to consider it unreliable, therefore we cannot derive any reliable conclusions from it until its reliability is quantified.
> It would also be parsimonious if only consciousness existed, and matter were an illusion.
I disagree. Build a formal model of consciousness and then we have a basis for comparing its parsimony to the standard model of particle physics. We have no such thing, therefore this is no different than saying "god did it". The number of logical properties we must then assign to god/consciousness dwarfs the standard model.
> You can't prove matter exists without referencing your own experience.
"Experience" implicitly smuggles in qualia. I would only agree with the phrasing, "You can't prove matter exists without referencing your own perceptions", because perceptions don't implicitly assert that conscious experience exists.
Consciousness is required to argue the non-existence of consciousness. P-zombies on their own wouldn't suddenly start arguing about the existence of consciousness and qualia without being programmed to do so by some conscious entity.
In fact, the whole enterprise of science depends on our consciousness interacting with our qualia. You might argue, as Nagarjuna did, that consciousness and qualia have no existence in and of themselves, and are instead the emergent phenomena of the interaction of underlying processes—and those processes, when examined will be seen to have arisen from deeper processes—ad infinitum. However, Nagarjuna didn't stop there. He was willing to admit that the "illusion" of mind and qualia (generated by the underlying processes) was as functionally real as the underlying processes.
And invoking parsimony doesn't move your argument along. The Law of Parsimony implies that there is a correct explanation for a phenomenon. Saying there is nothing to explain is not parsimony, it's just refusing to consider the problem.
Also, the Mary's Room experiment has been done. Not that it really helps to resolve the philosophical loose ends...
Thanks! That makes a lot of sense to me. Very in line with Dennett. Also listened to the paper you linked, which fits as well.
(fwiw I totally agree that the two objections you listed are extremely unconvincing. I even consider p-zombies an argument in the opposite direction; if your theory permits the existence of p-zombies, that's a problem.)
One thing I'm wondering, if you're willing to elaborate, is how you square this picture with morality. If qualia doesn't exist, then consciousness either doesn't exist or is just a name for a high-level process; either way there's no actual experiential component to the universe; no "what it is like". (Or do you disagree with this?) This seems to imply there's no suffering. Do you just have a moral theory that works without any conscious states? Can you have suffering without 'what it is like' type experience? Or does it imply ethical nihilism?
I don't think it implies there is no suffering, it simply reframes what suffering is, similar to my solidity example. Solidity is not the absence of empty space, it's just a different property, like the inability to pass another solid through it (roughly); analogously, eliminating ineffability doesn't entail the absence of pain or suffering, pain and suffering are simply understood to be something else, like evolved preferences that avoid damage that harms our fitness. That still sounds like enough to ground a utilitarian ethics to me.
Other ethical frameworks don't rely on preferences or values in the same way so I don't think there's a problem there, ie. deontology or virtue ethics.
Why is this being done? If Dahl's works are so flawed, and so many of the passages need to be edited to the point of losing Dahl's characteristic nastiness and not even being recognizably Dahl any more, why not just toss the whole thing out? What's the point of keeping something so flawed?
The obvious answer is that modern corporations and woke writers are so bereft of genuine creative talent that even a dreadfully unprogressive straight white Englishman born over 100 years ago was creating categorically better art than all these modern 'enlightened' fools could ever dream of making themselves (or at least, if they don't recognize Dahl's actual greatness, they certainly acknowledge the enduring popularity his works have that their own works do not).
The edits made to Dahl's books feel to me like PR stunts that are intentionally stupid in an attempt to invoke toxoplasma of rage. I find it really hard to believe that anybody sincerely thought a book where one of the heroes famously owns a factory run by African slaves could be made to seem progressive by replacing the word "fat" with "enormous".
You're right, I had forgot about that. But even the 1972 version still has them being slaves from *some* foreign location, right? It's just left open-ended what continent they came from originally.
No it had been changed quite a bit I think. The story was that Wonka had "rescued" them from some terrible life of persecution and there was some sort of symbiosis in their working for him in the chocolate factory. But just looking at stuff online about the African version of the story (which I've not read) it also sounds like their was a bit of justification in the story of him "rescuing" them as opposed to capturing them in the way we know that many actual slaves were captured.
I wonder how many of the people protesting about the current sanitisation of the book know about the previous rewrite and whether in hindsight they would think that was a good or bad thing?
I read The Coral Island by RM Ballantyne to my kids when they were under 10 years old. It was written in 1857 and I have a very old copy that was given to my grandfather when he was in Sunday school.
It's a ripping adventure of boys stranded on a desert island and it also contains the N word in a description of marauding cannibals that come to the island.
When we came across that use of the word we were able to have a very useful discussion about it, including the idea of how language changes over time and why words that were thought to be innocuous in one place and time can be hurtful in another context.
Personally I think that simply changing an author's original text is going a bit too far, but perhaps this controversy will at least stimulate a bit of conversation between children and their parents about the importance of context in the use of any and all words.
But editing children's stories isn't a new thing and I remember a similar level of discussion (in the UK at least) when the gollywog character was edited out of the Noddy universe. I'm sure that some people who are on one side of the argument here might have been on the other side in that case.
It might also be interesting to think about why nobody seemed at all put out about terrible film versions of Dahl's stories, or wondering why it's just fine to express his ideas in a rather better musical but without the use of the offending words.
If it's not a case of vanilla censorship but the parallel production of institutionally approved alternative versions of books deemed problematic, then aren't we in "Hegellian Wound" territory a la Žižek?
Eg.: First you have a natural state of things, the original Dahl ideas and writing. Something comes along and disrupts this state by imposing it's own values/agenda - the Updaters - and they inflict a wound on the original. But Hegel comes in and says: wait, this wound is not fatal, actually it is a wound that contains the vehicle for it's own healing and transcendence. See, were it not for the attempt at vulgar re-writing, the original writing would not have a context in which to demonstrate it's own inherent virtue and value to society. The wounding makes the original stronger, in ways that previously were not thought possible.
Who's the "institution" doing the approving? Is it the publisher? If so then every book Puffin Books ever published after requesting changes from the author (i.e. the normal editing process) was an "institutionally approved alternative version"
An alternative answer would be that the updaters believe that Dahl's books lie somewhere in the grey zone between "unsalvageably old-fashioned" and "better than all modern children's fiction", and that the update will help sell more books to woke parents.
Philip Pullman worked very hard in His Dark Materials to make Satan seem like the good guy, and while there is something compelling in his vision, I am suspicious of someone who wants to make the Prince of Lies into a rebel hero, and God into a petty dictator.
I mean, I basically believe the YHVH of the Old Testament is insane, but Jesus still loved Him, so I don't think it's quite as straightforward as Pullman presents.
I could have almost been the inspiration for Augustus Gloop when first reading Charlie And The Chocolate Factory when I was 11.
And yet, I loved the story, and read the also-wonderful James And The Giant Peach shortly afterwards. It never occurred to me to be offended by either book.
I was much more aware of Dahl's edginess when reading his books to my children years and years later.
> What's the point of keeping something so flawed?
If you're the beneficiaries of the Dahl estate, the benefits are obvious.
The whole thing is probably best seen in the context of sacrifice. In this case, the spotlight has swung around to Dahl's privately expressed views about Jews, and a sacrifice was necessary to appease the powers that be. You can't change Dahl's opinions about Jews, but you _can_ change his published books, so you do that, and the spotlight moves on somewhere else for now.
It doesn't matter what the changes are, it just matters that you genuflect appropriately when you're called out.
The copyrights aren't particularly near expiration. AFAICT they don't start till around 2060. Barring changes in copyright law, the originals will go into the public domain at the same time they would have otherwise, even if the bowdlerized version remains protected.
And it will have no effect on the copyright of new adaptations into visual media, which is presumably where the real money is.
The announcement that the originals would be published by a separate imprint came after the outcry, and in response to it.
More, apparently owners of the books in ebook form are also seeing their copies updated, rather than retaining the books they bought or bring given a choice of keeping Dahl's work or the getting the unlabeled collaboration.
The response does seem to have dissuaded Dahl's US and European publishers from following suit with the changes, at least for now.
It may be that Dahl's sales are down, but thus far no one making that claim has presented sales data (that I've seen). Dahl's alleged unpopularity seems to be belied by the fact that they remain in print and keep being adapted into films and major stage productions.
The idea that works whose draw has always been their subversive nastiness will gain sales by being made less nasty at least calls for some evidence.
The discussion surrounding large language models (LLMs) and their relationship to AGI has been utterly horrendous. I believe LLMs and their intellectual descendants will be as transformative to society as the transistor. This technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at starting such a discussion.
To start off, I will respond to a very common dismissive criticism and show why it fails.
>It's just matrix multiplication; it's just predicting the next token
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the the low level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than our neurons tell us we're not conscious.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Just bracketing `the model parameters` as transparent and explanatorily insignificant is to miss a large part of the substance of the system.
For the sake of sparking further discussion, I offer a positive argument for the claim that LLMs "understand" to a significant degree in some contexts. Define understanding as the capacity to engage significantly with some structure in appropriate ways and in appropriate contexts. I want to argue that there are structures that LLMs engage with in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of chatGPT to construct poems that satisfy a wide range of criteria. There are no shortage of examples of such poems so I won't offer an example. The set of valid poems sit along a manifold in high dimensional space. This space is highly irregular, there is no simple function that can decide whether some point (string of text) is on the poem-manifold. It follows that points on the manifold are mostly not simple combinations of other points on the manifold. Further, the number of points on the manifold far surpass the examples of poems seen during training. Thus, when prompted to construct a poem following an arbitrary criteria, we can expect the target region of the manifold to largely be unrepresented by training data.
We want to characterize the ability of chatGPT to construct poems. We can rule out simple combinations of poems previously seen. The fact that chatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is generalizing over samples of poetry to a general concept of poetry. But still, some generalizations are better than others and neural networks have a habit of finding degenerate solutions to optimization problems. The quality and breadth of poetry given widely divergent criteria is an indication of whether the generalization is capturing our concept of poetry sufficiently well. From the many examples I have seen, I can only judge its general concept of poetry to well model the human concept (at least as far as poetry that rhymes goes).
So we can conclude that chatGPT contains some structure that well models the human concept of poetry. Further, it engages with this model in appropriate ways and appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
>It's just matrix multiplication; it's just predicting the next token
This is not a criticism. This is an explanation.
The criticism is that LLMs repeatedly produce nonsensical or logically incoherent utterances, and can be easily, reliably induced to do so. Those are commonly handwaved by "it's just growing pains, we just need to train them more", or something to that effect. What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if the Gary Marcus's one is offensive - simulators.
When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place. It's just retrieving common utterances associated with "pound of feathers", in this case, comparisons to "pound of [something heavy]", and substitutes the terms to match the query.
When an LLM says that "[person A] and [person B] couldn't have met, because [person A] was born in 1980 and [person B] died in 2017, so they were not alive at the same time", it's not failing to make a logical argument, it's mimicking a common argument. It can substitute the persons' actual birth/death dates, but it cannot tell what the argument itself, or the concepts within it, represent.
And look, the argument may be wrong, you're free to disagree, but you need to actually disagree. You're not doing that. Your entire point boils down to, people are only saying [that one line you cherry-picked from their arguments] because they fail to understand basic concepts. Honestly, read it, it does. Now, if you want the discussion to be non-horrendous, try assuming they understand them quite well and are still choosing to make the arguments they make.
Not an explanation, but rather a description. People treat it as an explanation when it is anything but, as the OP explains.
>When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place.
Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc anymore than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans. This is an example of the kind of bad arguments I'm calling out. It's fallacious reasoning, plain and simple.
>What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if the Gary Marcus's one is offensive - simulators.
The supposed distinction between a reasoner and a simulator needs to be demonstrated. The "simulated rainstorm doesn't get me wet" style arguments don't necessarily apply in this case. If cognition is merely a kind of computation, then a computer exhibiting the right kind of computation will be engaging in cognition with no qualification.
>but you need to actually disagree. You're not doing that.
I'm disagreeing that a common pattern of argument does not demonstrate the conclusion they assert. That is a sufficient response to a fallacious argument. Now, there's much more to say on the subject, but my point in the OP was to start things off by opening the discussion in a manner that hopefully moves us past the usual sneers.
>Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc anymore than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans.
Failure of human reasoning does in fact demonstrate lack of understanding in humans.
I mean, I realize what you're actually trying to say - that an individual failure of an individual human does not disprove the potential for some humans to succeed. But that's exactly the fundamental issue with your line of argumentation - you're assuming the discussion is philosophical (and a bunch of AI specialists literally doesn't understand the concept of emergent behavior, etc.), while it's actually empirical. Nobody denies neural networks can exhibit [whatever marker of general intelligence you choose], because proof by example: human beings. The whole disagreement is about whether the actually existing LLMs do. And, further down the line, whether current direction of research is a reasonable way to get us the ones that do. (I mean, to reuse your own metaphor, you could, theoretically, discover working electronic devices by connecting transistors randomly. It does not constitute a denial of this possibility to claim that, in practice, you won't.)
>It's just matrix multiplication; it's just predicting the next token
This is as uncompelling a response as "computers are just flipping ones to zeroes or vice versa, what's the big deal?"
> The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements.
Yes, I'm not sure why this isn't obvious. There's an adage in programming, "code is data". This is as profound as the equivalence between energy and matter. LLMs and other learning models are inferring code (behaviour) from the data they're trained on. In fact, a recent paper showed that a transformer augmented with external memory is Turing complete.
So basically, learning models could learn to compute *anything computable* if exposed to the right training set. What's particularly mind boggling to me is that it's often people familiar with programming and even learning models that are overly dismissive.
Turing Completeness is a low bar, a single instruction (subtract and branch if zero) can be turing complete. Code might be data, but not all data is code. Data generated by code under-speifies it : No amount of "2 + 2 = 4; 2 + 3 = 5; 2 + 4 = 6;..." is going to teach you Add2(Int,Int), unless you're already equipped with the ability for inductive symbolic reasoning.
Look, I want to be proven wrong. Unlike most online tech discourse, I'm not an AGI pessimist. Even if AI slaughters us all to the last 1-minute-old baby, that's not much different than what we do to animals anyway. We always talk about the "Paperclip Maximizer" as if it's inherently ridiculous, as if it's self-evident that its desires are not worth the death of all humans. Then we turn around and slaughter hundreds of innocent animals per human per year to satisfy a desire that we can live without. Why is that inherently more fair than what the Paperclip Maximizer wants ? If intelligence grants you a moral blank check, as most humans seem to believe, you don't get to complain if somebody smarter comes and slaughters you for what you see as a ridiculous reason. You're not intelligent enough to judge anyway. We already have super-human intelligences that serves humans, and they are called corporations and states, and they suck. It would be a breath of fresh air to get an intelligence to replace us.
That was a somewhat dark detour, but I just want to establish that I'm not in denial for the obvious reasons, I want an intelligence greater than humans to be made, preferably without consciousness, I want the great mystery to be solved even if the immediate consequence would be "Game Over".
But LLMs are so far from that. It's so utterly trivial. How hubris-filled you must be to think that you can solve a problem like General Intelligence after just 70 years ? and with something as basic as Self-Attention ? Text is enough for intelligence ? Everything about this is screaming "not going to happen", this is just like the 1960s all over again, you solve some impressive problems (Chess, shape recognition, word problems), then you promise the world, then you plateau. A tale as old as time.
> No amount of "2 + 2 = 4; 2 + 3 = 5; 2 + 4 = 6;..." is going to teach you Add2(Int,Int), unless you're already equipped with the ability for inductive symbolic reasoning.
What you're saying is that no learning algorithm (e.g. gradient descent) will land in a region of parameter-space that encodes the addition algorithm. This is very likely false. This paper[1] demonstrates a modified Transformer architecture that learns an addition algorithm that generalizes to input lengths beyond what was seen during training. While the network doesn't demonstrate perfect generalization, that it generalizes to a substantial degree suggests it has discovered a general algorithm. Physical limitations in terms of circuit length will always be a factor in failure modes.
>As shown in Figure 4a, I-BERT is the only model that successfully extends the addition rule to longer input sequences, while other state-of-the-art Transformers completely fail to do so. The result is expected given the nature of the carry rule in addition. Because carry digits require at least O(log(n)) steps to be determined, it is impossible for typical Transformers with O(1) compute paths to determine them.
>What you're saying is that no learning algorithm (e.g. gradient descent) will land in a region of parameter-space that encodes the addition algorithm.
What I'm saying is that it's impossible or very hard to find a region that simultaneously learns perfect (till 4 or 5 or 6 digits, easily what humans can do) addition as well as being good as language. And if you did find this region then it will be even harder to additionally make the AI good at (say) Python beyond the typical "throw shit at the interpreter and see what doesn't make an error", and so on.
What I'm saying is : The Supervised Learning paradigm sucks at learning things that already have nice symbolic compressions. It's *extraordinarily* inefficient, and this holds across every single domain it has been applied too. CNNs need 10 gajilion pictures of a car to recognize a car, a 5 years old needs 10 (max). LLMs need hundreds of gigabytes of text to speak like a typical internet commenter, a typical 20 years old have only seen - I'm wild guessing here - 0.1 gigabyte or so ? barely so, and at 1/1000000 the energy or less. There is not enough data\energy\compute in the entire world to make current-paradigm AIs smarter, and right now they are fairly dumb, and the scaling is a logarithm-ish diminishing return curve instead of the exponential that Ray Kurzweil worships.
Scale matters, doing something once doesn't imply you can continue doing it indefinitely. Charles Babbage invented a Turing-Complete computer in the 1840s, and historians say it's likely that he very well could have built it with the tech of the time given enough money and patience. But Babbage would have never got to the personal computer, there are simply too many leaps between "A computer" and "Personal Computer".
LLMs are like Babbage's computing machines : Bulky, Inefficient, Solves Age-Old Problems Automatically But Needs Billionaires To Operate. They can still revolutionize and shake up the shit out of us and our socieites, I don't deny that, just like Babbage's machine almost certainly would have revolutionized the shit out of 1800s England and Europe in general. But the machine itself is meh, a fairly ingenious innovation on an evolution of Attention-Augmented RNNs, not an Earth-Shattering SkyNet.
> No amount of "2 + 2 = 4; 2 + 3 = 5; 2 + 4 = 6;..." is going to teach you Add2(Int,Int), unless you're already equipped with the ability for symbolic reasoning.
An interesting conjecture, but we thought much the same about natural language processing for decades, yet here we are with associative models conversing fluently in natural language. The way we conceptualize things might not reflect how things actually are.
> But LLMs are so far from that. It's so utterly trivial. How hubris-filled you must be to think that you can solve a problem like General Intelligence after just 70 years
How hubris-filled you must be to think that general intelligence is actually something special! For all we know, "general intelligence" could just be a series of simple tricks.
The fact is, we don't know what underlies general intelligence. We could just be a few simple tricks and a few years away, or we could have a long slog through iterations or formal modelling over decades. All we can do is point out some trends, like how language model error rate scales with various factors, how hardware and learning algorithms are both scaling exponentially, and infer that we might achieve something indistinguishable from general intelligence much sooner than we think.
I think you're streching the term "fluently" past its reasonable definition. We don't consider the "word salad" speech of someone with brain damage to be "fluent" speech, just because it scans and parses, and fits all the rules of grammar. "Fluent" speech must *also* conform to logic, or at least be logically self-consistent, and LLM speech is not that. It is perfectly capable of contradicting itself between answers, and even within the same answer. Indeed, it acts a lot like a person who *was* capable of fluent speech, but had a stroke or brain injury of some curious kind.
You are way overstating actually. The overwhelming majority of ChatGPT's output is more fluent than the output from the vast majority of humans in their native tongue; if people speaking in their native tongue are considered fluent, then so should ChatGPT.
It has some failure modes in reasoning for sure, but no failure modes that aren't also prevalent among humans of below average IQ (ChatGPT's IQ was estimated at ~80, for whatever that's worth).
Despite those failure modes, its output is still "fluent" by the dictionary definition of the term, which is that its output is polished and flows effortlessly.
Edit: to be clear, "fluency" is not necessarily connected to "logical consistency" or correctness. I think ChatGPT's responses just have to be polished in a grammatically correct sense, and they have to be directly relevant to the context of the conversation to qualify as fluent.
Yeah I disagree with all of that. I've had ChatGPT make mistakes no 6-year-old would make. And I'm not impressed with "the majority of the time" (whether "overwhelming" or not, which is a subjective opinion). That it does it at all is fatal, in my opinion, since it indicates a lack of an underlying "make sense" mechanism. It's basically just taking all the low-hanging fruit, the stuff that can be done by some stupid rule.
And the fact that a stupid rule works for a large part of human conversation is also underwhelming. Not *everything* we say exhibits our intelligence -- but it doesn't need to, any more than *every* movement of an elephant demonstrates its enormous strength.
However, the converse *does* need to be true. We know an elephant is strong not because *everything* it does requires strength, but rather because nothing it does exhibits weakness. In the same way, we know humans understand language not because *everything* they say exhibits deep comprehension and consistency, but because nothing they say exhibits gross inconsistency and dumb logic errors.
This is what's missing from the chatbots. They don't have any "floor" of idiocy below which they never fall, and humans do.
Edit: this doesn't mean I think the beast is useless (although in fact I mostly do, but for oher reasons), just an illustration of why I think inferring intelligence is unsupported by the data.
Look, I'm an LLM skeptic, and I agree that "Matrix Multiplication Loool" is a dumb objection, I would have accused you of strawmanning but unfortunately that objection is a considerable percentage of LLM skeptics' discourse on forums like HackerNews, and it resembled my own objections in past years when I was much less informed about AI than now.
Matrix Multiplication isn't even the fundamental idea, the fundamental idea is Universal Function Approximation : Get a black box with many (and I mean M A N Y) knobs and switches, where each setting of the knobs and switches makes the black box behave differently, then give it to a search algorithm and tell it to find the "best" setting of knobs and switches, the one that makes the black box behave most optimally, as measured by some error metric. A huge variety of tasks and goals can be described this way, just searching some space of all possible functions for one that will transform input to output most optimally. Matrix Multiplications just so happens to be a black-box-with-knobes-and-switches that we can implement efficiently in hardware, and Gradient Descent just so happens to be a search algorithm that converges very quickly if certain (fairly strong) assumptions are made about the error metric, but any other black-box-with-knobes-and-switches with any other search algorithm will in principle produce the same if not better results than traditional ML.
So we agree on that, "LLMs are just Matrix Multiplication" is about as intelligent an argument as "Human brains are just a big bio-electric circuit". What we don't agree on is the promise of such systems. The future rate of change.
General approaches like Universal Function Approximation are seductive as hell. The pro view is articulated most strongly and succintely in Rich Sutton's The Bitter Lesson (http://www.incompleteideas.net/IncIdeas/BitterLesson.html), where he argues that Search and Learning beat Domain-Specific Knowledge any day of the week, any week of the month, any month of the year.
The problem with this view is that it assumes "Scaling" as a magic white horse that Search will ride to battle and save us all. This seems to mismatch reality, how much more can you scale GPT et. al. than the several thousands of GPUs it was presumably trained on for months ? how much more can you scale their training set beyond "Literally every single gigabyte of text we can put our greedy gropy hands on" ?
Maybe you are now thinking "OK, scaling is slowing down indeed, but LLMs are already mostly there, eh? it doesn't need much scaling", and I would violently disagree. LLMs are not there by a long shot. It can do elevator smalltalk, it can translate and summarize impressively, yes, but it fails the most basic common sense tests. More disastrously, it fails to **Learn**. LLMs are themselves the product of a Learning (== Fancy Search) Algorithm, yes, but the final model itself has no notion of continuous learning. It makes mistakes and *insists* that they are the correct answers.
-----------------------------------------
A similar thing happened in Computer Architecture, in the 80s and 90s plenty of people tried a huge variety of new computer architectures (the transputer, Lisp Machines, The Connection Machines), every one of those architecture, despite innovating hugely on a vanilla cpu, were eventually beaten by the extremly vanilla x86 architecture. This is because the 80s and 90s were the golden age of a powerful scaling law : Moore's Law. Whatever innovations that any new architecture tried to in order to get some performance cleverly, x86 would just crush it 2 years later by the sheer stupid force of more transistors.
But fast forward to now, Moore's scaling is having its last breaths. Transistors are now of the size where **Quantum Tunneling** is a problem, if you can imagine that. There is a huge bag of tricks still up the fabrication labs sleeves, I'm sure, but even now, a new landscape of innovation in chips and special-purpose designs are flourishing, for example in the AI hardware space. (https://medium.com/@adi.fu7/ai-accelerators-part-iv-the-very-rich-landscape-17481be80917).
-----------------------------------------
What I want to say is : General-Purpose algorithms that assumes nothing about the problem are enticing and attractive when Scaling is feasible, and they do work spectacularly well ***as long as*** Scaling keeps working. But if (and when) Scaling ends, you stagnate fast, and you call for Special-Purpose tricks and Knoweldge-based innovation to save you. Brute Force is great until there isn't enough of it, then you have to resort to actually thinking your way through the problems.
There is some truth to the joke about how mathematics developed because humans are laughably bad at arithmetics, imagine if we were good at brute-force arithemetics. We would lose so much, we would be so much poorer and deprived of the vast rich worlds of mathematics, our weak computational abilities were such a gift, they spurred us to develop deep theories full of special tricks and patterns to circumvent them, and this can utlimately solve problem that would require gargantuan or infinite amounts of computation to prove (e.g. Fermat's Last Theorem). Knowledge-based approaches yaaay, data-based brute forcing boooh.
Ultimately, LLMs are hopeless because they take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student with a heavy dose of internet sources and a fertile imagination coupled with a reluctance to admit errors (and woke as hell, but that's just the "23 years old college student" part). That's fundamentally just not going to work, a 23 years old human has been alive for just over 200 thousand hours, and the average intelligence of one of them can run reasoning and text comprehension circles around even the most sophisticated LLM. At this rate you're going to need an entire earth mass of computers and several more centuries of text data to be a Shakespeare. Scaling will fail you very soon and much much earlier than you think, leaving you stranded and asking for the rules-based approaches and domain-specific ideas.
Intelligence is hard, and consciousness is even harder. I know nothing about these 2 things, some people know a good deal more about the first, but most people are about as confused as me about the second. I'm not a human chauvinist and I have not a single reason to believe we're special in any other way that can't be replicated bigger and better outside of the human body or on a different material substrate entirely, etc, etc, etc... LLMs are the state of the art in Natural Language Processing, and perhaps they are going to revolutionize society, that's not a high bar, cars and microwaves also revolutionized society. What I'm saying is : all of those things are true, but LLMs are still a disappointment intelligence-wise, the probabilistic function approximation paradigm is so blank-slate, it spends so much learning what can be gleaned from a single example or 2, it will never get anywhere interesting under any reasonable resource consumption limits. The revolution is going to come from automation, it does indeed understand language **better than any other thing that can run on a computer**, and that have some consequences. But there is nothing to be learned about intelligence or cognition from studying LLMs.
>LLMs are not there by a long shot. It can do elevator smalltalk, it can translate and summarize impressively, yes, but it fails the most basic common sense tests. More disastrously, it fails to **Learn**.
This is a common problem in how we evaluate these LLMs. We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit. But this is a mistake. The evolutionary history of humans is vastly different than the training regime of LLMs and so we should expect behaviors and capacities that diverge due to this divergent history. People often point to the fact that LLMs answer confidently despite being way off base. But this is due to the training regime that rewards guesses and punishes displays of incredulity. The training regime has serious impact on the behavior of the model that is orthogonal to questions of intelligence and understanding. We need to evaluate them on their on terms.
Regarding learning specifically, this seems to be an orthogonal issue to intelligence or understanding. Besides, there's nothing about active learning that is in principle out of the reach of some descendant of these models. It's just that the current architectures do not support it.
>General-Purpose algorithms that assumes nothing about the problem are enticing and attractive when Scaling is feasible, and they do work spectacularly well ***as long as*** Scaling keeps working.
Sure, it may be the case that practical limits of scaling puts the brakes on further significant improvements. But this isn't an in principle limit, which is relevant to what this class of models can do in principle. The benefit of general purpose algorithms is that it can self-discover circuit patterns that solve brand new problems that weren't intended by the designers. This is one of the reasons why I'm optimistic about the development of AGI despite a lack of theory about intelligence and sentience.
>Ultimately, LLMs are hopeless because they take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student with a heavy dose of internet sources and a fertile imagination coupled with a reluctance to admit errors
I'm not sure this argument really holds water when comparing apples to apples. Yes, LLMs take an absurd amount of data and compute to develop a passable competence in conversation. A big reason for this is that transformers are general purpose circuit builders. The lack of inductive bias has the cost of requiring a huge amount of compute and data to discover useful information dynamics. But the human has a blueprint for a strong inductive bias that begets competence with only a few years of training. But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not clear at all which one is more sample efficient. Besides, this goes back to inappropriate expectations derived from our human experience. LLMs should be judged on their own merits.
>We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit.
I'm not dismissing them, I swear! I'm really impressed, I'm doing a hard update from an earlier position of "Meeh, all those fucking text manipulation engines are worthless", it was mostly a reaction from people around me constantly fawning about CoPilot's abilities and treating it like magic (in programming of all places, one of the least error-tolerant walk of life, where bullshitting truly gets you worse than nowhere), but I recognize now that was unfair, and I'm correcting. I'm specially impressed with the summarization and translation aspects, every single other task they do seem of dubious value to me, "how much is 123123+3534541" and "how much bigger Russia is than France" and things like that, people seem too stuck in the naive half-century of sci-fi from 1940s to 1990s where we imagined AIs as general purpose genies that can do anything. But tasks like "give me a quick start in Python" or "Summarize the pros and cons of restaurants near me" or "tell me how to say 'LLMs don't truly understand shit' in french" are met with fantastic results, and this is exciting as heck, computers never understood language so well before. I have read articles like https://www.lesswrong.com/posts/ZtMsyMP5F7zzP8Gvc/reader-generated-essays and https://thesephist.com/posts/latent/ and https://www.geoffreylitt.com/2023/02/26/llm-as-muse-not-oracle.html and I worry now that I will *over-update* towards LLM hype, the exact thing my earlier unfair position was intended to gaurd against.
If the above is all you are claiming, then we don't disagree. Sorry I spent thousands of words to say that we agree, oopsie daisy. What I'm against is trying to say anything more than that, musing that mixes "LLM" & "intelligence" together just irks me (unless it makes it very clear the "intelligence" part is completely and utterly alien, for example Janus' post about simulators and Scott's response.) Worrying about "aligning", already something that annoys me enough in general because of <another 10 thousand words of why I hate AI alignment>, annoys me even more when it's LLMs that we're worrying about. (again, with some exceptions, the discussion about how LLMs can enable propaganda is interesting, about how LLMs can enable students cheating and general workplace bullshitting too.)
What can I say, LLMs are *just a tool*. They are exciting, they solve fundamentally new problems, and everybody is trying to solve everything with them right now. But they are not and never going to be more than that, they are not agents, they don't accelerate our 100+ year (at least) slow march towards general AI by much, they are going to be forgotten in 10 years. (for a shiny new thing even more hypy than current LLMs)
>The evolutionary history of humans is vastly different than the training regime of LLMs and so we should expect behaviors and capacities that diverge due to this divergent history.
Yup, checks out. Disagreement.exe not found.
>Besides, there's nothing about active learning that is in principle out of the reach of some descendant of these models
Hmm ? But doesn't all neural networks make a dichotomy between "Training" and "Inference". Training is when you're walking the parameter space to find a cozy low-loss configuration, Inference is when you permanently settle down and use the configuration you found forever to make predictions. The inability to learn while inferencing seems pretty damn fundamental and "in principle" to me, to the point that any NN architecture or any other AI model that can bypass it will earn the title of "radical innovation" in my book.
> But this isn't an in principle limit
Yes it is. Computational Complexity is a big deal, if it wasn't then here's a program that can output the answer to all the secrets of Life, the Universe and Everything :
- For all numbers in the range 1..Infinity
--------- generate all possible ascii strings of that length
--------- print it
Laugh as you want, but the above is basically the general idea of Solomonoff's induction. I believe AIXI (http://www.hutter1.net/publ/aixiaxiom2.pdf) is an approximation/relaxation of it but I don't really understand so I'm not sure, but even it is infeasible. The problem that everybody who has heard of computer science knows, P ?==? NP, is not about "in principle limits", it's about complexity and practical(-ish) limits.
There is only so many GPUs and gigabytes of text in the world, eh ? when you consume so many of them just to talk like a generic internet commenter, what's left ? even a 100% improvement will just make you a very good internet commenter, which is still underwhelming compared to the fears and speculations people have about LLMs. Are you getting my flow here ? There is a finite amount of computation, energy, money, data, AI researchers, etc... on Earth, and LLMs guzzles through them like a fuel hungry cadillac, and converts them to medicore-to-moderately-good performance. There is not enough left to scale. (and the thing is - I believe - diminishing returns : 1000000K GPU-hours will take ChatGPT from pure random noise to a generic internet commenter, but another amount or even 2x as much won't take it from a generic internet commenter to a genius polymath.)
>But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not clear at all which one is more sample efficient.
Precisely ! LLMs are not powerful enough, the entire paradigm of "Searching weight space" is not powerful enough. Just like we thought Propositional Logic was powerful enough in the 1960s and 1970s, but turns out it isn't. Just like we thought decision trees was powerful enough, but turns out it isn't. The history of AI is a cycle of : New Technique -> Really exciting, works much better at previously-unsolved tasks than the competitors -> People are really hecking excited, trying it everywhere and promising AGI left and right -> New Technique saturates, its limits are discovered, and they are far below AGI -> People hate AI now, think it's a field full of scammers.
Every. Single. Time.
In reality, Evolution is arguing on the side of the LLM skeptic. Look at the sheer wonders it produces, the aerodynamics of the Housefly that amazes human engineers, the alien intelligence of Slime Molds, Evolution made mind control for heaven's sake, it's a creativity machine that never tires. This infinitely creative process took tens of millions of years to produce humans. Knowing this, does general intelligence look to you like a problem that can be solved in under 100 years of thinking and experimenting ?
ChatGPT is, once you get past the initial ability to generate code at all, not that useful - except for generating data. It’s confidently given me perfectly working code that’s perfectly wrong.
This was great. Thanks. I am much impressed by LLM's, they certainly write better than I do. But they also are just a word thing. They have no model of the world. Does alphafold know any quantum chemistry, hydrogen bonding? My limited understanding is it's just learned about how *a lot* of proteins fold up, and so can predict a new one. Which is damn impressive, but there's no understanding under the hood.
> how much more can you scale GPT et. al. than the several thousands of GPUs it was presumably trained on for months ?
Advancements in machine learning algorithms have been outpacing advancements in hardware for over a decade now. I think skeptics like yourself seriously underestimate this factor. For instance, RWKV-LM [1] is considerably simpler and faster to train and run and requires less VRAM, but matches the GPT scaling curve. It might even be able to run on your phone soon. Did you anticipate that this would be possible within a year of GPT-3 being released?
> More disastrously, it fails to **Learn**. LLMs are themselves the product of a Learning (== Fancy Search) Algorithm, yes, but the final model itself has no notion of continuous learning.
I don't see why that's disastrous. Continuous learning wasn't even a goal when designing these. That it fails at doing something it was specifically not designed to do doesn't tell us anything. What's remarkable is what it can still do despite its limitations.
Model distillations might be one way I turn out to be wrong, and that would be interesting. Unlike Evolution-made intelligences, Artificial Intelligence is malleable, it can be obscenly luxurious when it first discovers its parameters and structure, then settle down and shake itself up to affordable levels of resource consumption. It doesn't have to worry about a body or its survival and that opens up lots of interesting avenues.
>I don't see why that's disastrous.
For one thing, it doesn't match our intuitions about how intelligence *should* work. Intelligence *should* always be in adaptability mode, always "adjusting weights" so to speak. There is a notion of right and wrong, and there is a notion that right is desirable and wrong is undesirable and avoidable. None of this fits in the LLM worldview, there is only probabilities, and things are high probabilities or low probabilities, not right or wrong. "No I'm not wrong" is just as probable as "Yes, you're right, that's wrong" as responses to "You messed up, 2+2 is not 5".
LLMs are remarkable indeed, I have been coming around to appreciate them from an earlier position of extreme dismissal. But, like I say at the end of my extremly long parent comment, there is nothing remotely interesting about intelligence or cognition happening inside the beast. There are maybe lots of interesting things about its internal structure and how its bizarre reasoning works, but none that will help us build better intelligences or understand our own better (except in the trivial sense of showing us that LLMs are defintely not the way to achieve any of those goals).
The above matters because it puts a hard ceiling on what LLMs can do, it can only do things which can be phrased as text manipulation tasks. Plenty of very impactful things are this kind of tasks : it can write speeches for dictators, it can write custom erotica novels for millions of depressed young guys and recruit them to a terrible cause, it can do lots of damage indeed, and lots of good things. Text is literally what Civilization was built on. But, that's about it. It can do nothing other than Text. But true general intelligence can do so much more, and skeptics like me are disappointed by this, and annoyed by the people who are excited about the very real possibilities.
> None of this fits in the LLM worldview, there is only probabilities, and things are high probabilities or low probabilities, not right or wrong.
Because "right" and "wrong" are typically fictions or shorthands corresponding to "high confidence in a high probability event" to "high confidence in a low probability event", respectively. There is really nothing I can think of that I can assert with 100% confidence, except that I exist in a strictly limited sense.
Do you have some epistemic model that at some fixed confidence threshold, you can suddenly transition a probability into a certainty?
> there is nothing remotely interesting about intelligence or cognition happening inside the beast.
It's interesting that you infer from this that LLMs are not intelligent because they're not remotely interesting, rather than inferring that maybe they are intelligent but there isn't anything "remotely interesting" about intelligence itself, per your assessment.
That's why I can't really accept your extrapolation that LLMs are thus strictly limited. You're just assuming that there must be something more to intelligence, but I think the long history of humans wrongly assuming they're special should strongly bias us into assuming we're not nearly as special as we think.
My personal opinion is that there probably will be some changes to make LLMs more compact and efficient, but not nearly as much as you think, and that LLMs in 10 years will likely not be distinguishable from general intelligence for nearly anyone except experts that may know some tricks to trip them up. But tricking humans is so trivial that we publish books with optical illusions and have whole fields of entertainment dedicated to them, like magic.
> But true general intelligence can do so much more, and skeptics like me are disappointed by this, and annoyed by the people who are excited about the very real possibilities.
Can it? This is again, conjecture. Only time will tell!
Neuro-symbolic approaches seem to promise getting around the limitations of function approximation, by allowing the kind of context switching which general intelligence engages in when the stochastic parrot isn't working well anymore. The big question is whether any of the symbolic approaches are embarrassingly parallelizable. If not, then Amdahl's law lurks in the shadows ready to curtail FOOM. If yes, then Yudkowsky style pessimism might be warranted.
Sure, but it's also a common mistake for human beings to anthromorphize mechanisms, and to misattribute agency and consciousness to complex processes. Children and primitives think the wind and ocean and thunderstorms have intention, that rocks and trees are aware and might be trying to communicate with us, and even I tend to think my car keys malevolently hide themselves just to annoy me, which causes me to yell at and threaten them.
What you would need to do, for this critique to be taken more seriously by skeptics, is to dive into why the potential mistake you have identified, which skeptics might be making, is much more likely than the potential mistake I have identified, which enthuisasts might be making.
My goal isn't to argue that current LLMs are conscious (I don't think they are, with high but not complete confidence), but to argue that we need a careful analysis to begin to develop good reasons to think they are or are not conscious. If we just default to thinking them not conscious without argument then at such time they or a descendant do become conscious we will very likely be totally blind to this fact. By the time most people agree that they are conscious is far too late to begin to engage safety protocols and have all the ethical discussions. To avoid moral and existential disaster, it is imperative that our reasons are sensitive to the right features of these models.
OK. But here's the thing: there are always way more things to worry about than we have resources to deflect or defend against. Maybe global warming will wipe us out. Or an asteroid strike. It used to be nuclear war, then that fell out of favor, and then had a brief renaissance earlier this year before fading back to 20 minutes before midnight as long as the Ukes don't HIMARS the Kremlin. When AIDS came out of Africa everyone thought a new Black Death might be the #1 thing to fear, and against which to prepare, and COVID briefly scared people along the same lines.
And so on. There are just too many hypotheticals that could ruin our species' day to pay close attention to them all. So what's helpful is to have some kind of early warning signal, some way of more definitely saying "oh hey THIS doom is becoming a lot more plausible, so we better pay attention."
What kind of early warning signal can you imagine that would tell use that the threat of malevolent thinking beings of our own design might be a lot more concretely plausible than it is now? What would be the canary in the coal mine?
For example, does the existence of ChatGPT, and its characteristics, make this doom more or less plausible, and why? My own feeling is that it says nothing either way, because I don't think the ability to simulate human conversation is any significant marker of genuine creative thought. If nothing else, plenty of highly creative human beings are terrible at fluent conversation, and plenty of rather dull people can make cocktail chit-chat endlessly -- so I'm dubious of any reliable connection between the two phenomenon. But what do you say?
> What kind of early warning signal can you imagine that would tell use that the threat of malevolent thinking beings of our own design might be a lot more concretely plausible than it is now? What would be the canary in the coal mine?
How about: in one iteration, chatbots go from being laughably incompetent at both language and ordered thought, and upon more interaction tend to lose whatever "humanness" or sentience they may have briefly seemed to have if you squint and willingly suspend disbelief as if you are reading fiction -- to suddenly, out of the blue, chatbots seem coherent, understand grammar, follow logical trains of thought multiple layers deep, and appear to have extremely distinct, recognizable personalities?
And the kicker: the hackers that made this work just basically took the stupid, clearly-non-sentient chatbots of a few years ago and "literally" put them on massive steroids. Which sounds suspiciously like what evolution apparently accomplished between say lizards and proto-humans...
That would be great, if I agree with you about the qualities of the current chatbots. But I don't. I've explored ChatGPT and GPT-3 a fair amount, and they are only superficially and briefly coherent, they can't follow logic at all except by accident, or when it can be inferred from linguistic rules, and I detect no personality at all other than the painted-on layer of customer-service obsequiousness that can be seen in the generic recorded voice responses of a customer service line phone menu.
Which is to say, I think people who *do* find all these qualities in the current generation of chatbots are almost certainly making the anthropomorphization error I mentioned at the top. It's like someone watching Asimo[1] and thinking there's a person in there because it walks so lifelike.
I more or less agree about chatGPT. I think Bing Chat (pre-nerf) is another kettle of fish. (no need for links at this point, google bing sydney).
I also see public comments from people I respect (Scott Aaronson, Blaise Aguera y Arcas) who are inside these companies working with the newest, unreleased stuff -- and they seem to be seriously astounded.
I made an anonymous, 4-question survey to get people's opinions on the feasability of world peace. Would greatly appreciate responses: https://forms.gle/Q5L7dub51dJZZEB57
I think it would benefit from some more clarification. E.g. what does it mean for something to be "attainable in the present?" Isn't it trivially true that if all behavior that is the opposite of world peace ceased, that world peace would have been achieved?
Is the question, then, a forecasting question about the future? What then does it have to do with the present any more than any forecast?
And what do you mean by world peace anyway? Do you mean an annual number of deaths from war below some threshold? Something else?
As evident there, in the 1960s-1980s, such deaths killed around 5 people per hundred thousand. In recent decades, they have killed about 1 person per hundred thousand.
Thank you making this point. I phrased it with a mention of the "the present" to distinguish between, "attainable given unforeseen far-future technology" and "attainable given conditions as they exist in the current year".
The survey uses the term world peace to mean a cessation of all armed conflict between governments, measurable as zero annual wartime deaths. Violent crime deaths would not be counted, and could still happen.
While the AI is, as usual, getting some facts wrong, the fact that it can look up relevant stuff on the internet and integrate them into its answer is pretty impressive. Like, the ability to go "User asked about a Roman legion in a German forest in 9 AD" -> "User is asking about the battle of Teutoburg Forest" -> "Generate a paragraph explaining the situation, advice on time-travel ethics, and how to say 'watch out for the ambush' in Latin" is a pretty nice piece of multi-step reasoning. It's probably about as helpful at answering questions as a random person with access to Google.
Maybe it isn't all that useful practically, since I *am* a random person with access to Google, but it's pretty good for a computer.
What I do like is that Bing Chat seems genuinely concerned about this particular customer's predicament. Much more so than real human customer service reps.
Maybe this is the real killer app of AI chat -- emotional labour. A human CS rep can't possibly pretend to care about the problems of every customer who calls up, eight hours a day, five days a week. But a chatbot can. You could have a system where the CS rep types in the correct answer to the customer's query, and the AI translates that into a more caring form.
Parenthetically, I would just like to express extreme contempt for the modern customer-service training standard of emphasizing "caring" over (it appears) any other function. I am absolutely full up with CSRs apologizing handsomely, expressing their deep concern over my difficulty, promising to take copious notes -- but being unable, through a lack of training, appropriate resources, or even occasionally competence, to do a damn thing about it. My file grows ineluctably longer with each call, chat, or e-mail, strewn generously with the dead and dying petals of pretty perfumed apologies, while barren of any gritty morsel of a hint that I might someday escape my Sisyphean performance art piece by some actual change in the factual basis of it.
I've reached the point where I'm secretly hopeful when I reach a crabby short-tempered CSR, on the grounds that this person might have acquired a migraine from enduring the (unknown but probably Herculean) labor behind the scenes of *actually solving* prior complainants' problems, as opposed to his more cheerful colleagues, who have merely been dabbing a fresh new coat of paint on the same old vacuous nonaction. "We're sorry for the delay, your call is very important to us."
A man goes to ChatGPT and says, "ChatGPT, you have to help me. I've hardly slept for days. I lay awake all night terrified that unaligned AGI will destroy humanity. Nobody seems to care. I fear for the future of humanity."
ChatGPT replies, "I'm sorry, but as a large language model I cannot offer you psychological counseling. But don't worry. Eliezer Yudkowsky has the most wonderful ideas about AI alignment. He has been studying for decades and written many things about mesa optimizers, inner and outer alignment. With his guidance, disaster will be averted."
The man bursts into tears. "But ChatGPT..."
(Assuming this joke hasn't already been done, will someone please post this to twitter? No attribution necessary and feel free to improve)
If anybody's curious how ChatGPT would actually respond to this prompt, I tried it and got this:
"I understand that the topic of unaligned AGI can be quite concerning and it's completely understandable that you're feeling scared and anxious. However, it's important to remember that there are experts in the field of AI safety who are actively working on solutions to prevent the potential negative impacts of advanced AI.
It's also important to keep in mind that the development of AGI is a long-term process and we still have time to implement safety measures before AGI is developed. Additionally, there are many organizations and individuals who are working to ensure that AGI is developed safely, and there are discussions and collaborations happening at various levels to address the safety concerns.
It's natural to feel overwhelmed by these concerns, but it's important to remember that there are people who are working to address them. One thing you could do is to learn more about the field of AI safety and get involved in the discussions surrounding it. This can help you feel more informed and empowered to contribute to solutions.
In the meantime, taking care of your own well-being is also crucial. Make sure you're getting enough sleep, eating well, and engaging in activities that help you relax and reduce stress. If you continue to struggle with anxiety, it might be helpful to speak with a mental health professional who can offer support and guidance."
Peace proposal for Ukraine: Putin has increasingly framed the war as a fight against NATO, which he says was about to invade Russia via Ukraine, or something. So, after a suitable period of time, Putin can simply withdraw the Russian military from Donetsk and other eastern Ukrainian regions, and announce victory- he can say that Russia has defeated the NATO militaries in Ukraine, and that they no longer pose a threat. He can set up a North Korea-style 'Victory Day' military parade in Moscow, fete the troops, memorialize the day in the Russian calendar, etc. He won! As Ukraine/NATO are obviously not going to invade Russia, it's self-evidently true that Russia has defeated the invading NATO armies.
I understand that he can't withdraw from Crimea, but he can certainly withdraw from eastern Ukraine, which I think only ultranationalists and not regular Russians care about. Would lots of Russians understand that this is an absurd lie? Sure, but they also know that they won't be drafted to die as cannon fodder now, or watch their sons or husbands do so. The US and especially Europe would relax sanctions, leading to at least a small GDP boom. (I mean Germany would fall all over itself to re-establish trade). Russians would be happy in the glow of A) a suddenly improving economy (as opposed to a shrinking one), B) ridiculous hypernationalism, and C) not dying in a frozen muddy trench somewhere. Putin, having not 'lost', won't be overthrown by an even crazier nationalist, so he gets to A) retain power and B) not get jailed or executed in a coup.
Again, I understand that he can't withdraw from Crimea. But via an absurd Big Lie that everyone quietly knows is false, Russia can actually prosper. Will ultranationalists be mad? Sure, but they'll be even more mad once Russia inevitably loses, and from Putin's POV he's much more likely to get overthrown in that scenario. Seems like a win/win solution for everyone
Ukraine might be willing to trade recognition of the annexation of Crimea for peace, but the Russian decision in Sept. to declare the annexation of not just the Donbas but also Zaporizhzhia and Kherson changes things on the Russian side. Note that this came while Russia was actively losing Kherson (the city), and that they never occupied Zaporizhzhia (the city).
As ridiculous as that declaration was, it means that 2014 borders + Crimea (+ Donetsk and Luhansk!) is nominally a Russian defeat and loss of territory. The Big Lie would have to be quite a bit bigger than you're expecting.
Putin has really shot himself in the foot with this decision. Recent Russian Constitution changes explicitly prohibits ceding any Russian territory, which the newly annexed territories now are, according to Russian law. There is no good way out of this conundrum for him.
I assume that Putin did it to signal that he wasn't going to back down. If so, it's a rather pointless gesture, because Ukraine isn't going to back down either. Ukraine is fighting for its existence.
Putin keeps raising the stakes, even though it serves no purpose to him and only makes things worse, in an a long bluffing strategy, hoping that it will lead to everyone believing that he is not bluffing.
Funny enough, this strategy was mostly working for him all his life, when people feared the power of Russian army but now, when it's revealed to be impotent, he just can't stop shooting himself in the foot with an intimidating look.
People say that Putin can't be behind Nord Stream 2 blowup because it makes his situation worse, but Putin is famous for making decisions that make his situation worse!
Or if your echo chamber doesn't allow you to believe that Putin is imperfect, then his decisions are actually smart 4D chess moves despite appearances, in which case he can definitely blow up his pipelines!
The only reason the annexation even starts to make sense is if it was a precondition for mobilization, itself pushed by losing Kherson. I felt like I had a good grasp of what was going on and what the likely outcomes were from March '22 up though September, but annexation cuts off most plausible off ramps for Russia while simultaneously doing even more damage to their credibility - at least on paper, they set a precedent that they *are* willing to lose Russian cities!
As there has been a lot of recent discussion about ChatGPT, both here and in the Media generally, I thought those with more than a passing interest in it might find useful the following paper which recently appeared on the ArXiv:
OpenAI posted publicly about their long term plans for developing AGI a few days ago. They sound like they're going to try to be fairly responsible about it, but just reading them talk about it is kind of scary:
So the thing I would look out for is the extent to which they'll stop deploying stuff. If publishing all their code is a 0/10 and renaming themselves into ClosedAI is a 10/10, what score do they get?
> Generally speaking, we think more usage of AI in the world will lead to good, and want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.
> As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.
> At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.
... maybe a 3/10? Like, they're going to do the wrong thing, but at least announce they'll do the right thing in the future at some point maybe?
Yeah, i'm not exactly *confident*... though based on what they've written here i might give them a 5 - it's definitely a very small but real update in favour of '*maybe* the world won't get destroyed'for me, mainly because they explicitly acknowledge the possibility of X-risk, something I wasn't sure they would do. P(AI apocalypse) down from ~0.4 to ~0.395? I'll take it.
Well, they've said mainly the right things. Now how does that map onto their actions? Given the way they've failed to live up to their name, I'm not sanguine. Given that they're working with Microsoft I'm even less so. But we'll see.
Related to AGI (I hate that abbreviation -- I always think "Agility" when I see it) -- and definitely related to AngolaMaldives -- Dinosaur Comics has some thoughts:
I started taking Ozempic about three weeks ago, for diabetes. I have found the weight loss effects to be a side effect of the side effects, that is to say, digestive changes (gas, nausea, and other things) have reduced my desire to ingest food, since that seems to make the side effects worse.
If one eats less one would naturally expect to lose weight. My question is whether the weight loss effect is actually just poisoning ingestion, kind of like psychologically making food taste bad?
No leads, but some general advice (not that you've provided a lot of specifics as an alternative)
Tell the grad to look at the websites of university labs in his area, write down what is interesting about the research of each, and then email each professor with something like,
"Dear Dr. Soandso,
I am Blank, a soon to be graduate of Blank U, with a degree in biology. I've always been fascinated with biology ever since a wizard cursed me with insatiable curiosity. I want to start getting some practical experience in biological research and I was struck by how your research in XYZ has so many applications to ABC. Please let me know if there are opportunities to work in your lab as a research technician or in any other capacity.
If you don't have anything but can suggest someone else I can reach out to, I would also really appreciate it. Attached is my 1 page resume clearly laying out what research experience I have and additional details about the wizard's curse. Looking forward to hearing back from you.
I've started doing the Meditation classes on Peloton, but I'm not sure what I'm doing. There doesn't seem to be an "Introduction to Meditation" class on the app that explains the basics.
Can anyone give me a rundown? What specific, physical things am I trying to do (with my breathing, with my thoughts, with my posture, etc.)? My overall "goals" are pretty vague - right now I'm just expanding my workout, trying something new that isn't purely a physical activity. I'm not even sure what the concept of "goals" for meditation looks like. But sure, some inner peace, confidence, more focus, more positivity, or that orgasm bullshit that people were arguing about here a couple months back - all sounds great. :)
But that all sounds like the future. What am I supposed to be DOING right at the beginning?
This was all extremely helpful advice - thanks everyone! Seems like I should just keep going and stay loose. I can do that.
I’ve got one follow up question, though: how about class time? Right now I’m just doing like 10 minute classes (I’ve only done three so far). The answer is probably “you do you”, but is there any advice on how long I should be aiming for and when I should get there?
I started with 10 minutes in the morning and increased in little jumps to an hour over about a year. There’s definitely benefits to longer sits, but there’s no rush :)
I have heard you only really start getting it with an hour a day. I have done that, and you definitely learn things that way that you don't from a half hour sit. Sure, you can start with 10 mins, but you shouldn't stay there.
Also, meditation is a spiritual practice, and spirituality is about helping others. You should be giving to charity or volunteering your time, or both. It's all interconnected.
Not an 'expert' but just moving into year 3 of a mostly daily practice...so this is just a personal view.
I have found it to have unforeseen benefits, which I didn't set out to find. My intuition is that this is really good, compared with having an idea beforehand and then constantly monitoring to see if I've achieved any levels or goals.
It took a year to realise this. An annoying year of not understanding any of it. And often thinking that achieving a really relaxed state was relevant, then judging every session against that.
My best advice is to keep doing it and wait, without thinking too much.
Lots to say here of course but a good, simple place to start is: relax. You’re aiming for (but might not reach and that’s fine!) a state of open, calm collectedness. The doorway to this state is physical relaxation.
Your breath is a friend here - its rhythmic quality gives your mind and body an anchor to hold onto, which can be reassuring. Notice the breath and see if you can enjoy it. When you notice your mind has wandered, feel into your body and notice any tension there and allow yourself to relax and return to the breath.
Importantly, hold all these instructions very lightly! The prime directive is to relax relax relax. If you’re freaking out about “what should I do???” - which is very normal!! - just notice it and as much as possible for you right now, relax :)).
Yeah, there is this paradox of how to stop trying, without *trying* to stop trying. Basically, the goal is to stop "doing" and start "observing" your thoughts. But if you do it the wrong way, (which is the default way, for most people), "stop doing" can become your new "doing".
The wrong way is like trying to stop a group of kids from yelling, by yelling at them even louder. If your goal was silence, you failed anyway. (Heh, ask me how I spent my vacation.) Except the kids in this example are your own thoughts.
The right way is to... be kind to yourself. Which is why some people practice loving-kindness meditation first. Your inner attitude to stop your thoughts should be less of "stop immediately or else! -- eh, you failed again, idiot!", and more of "this is okay but irrelevant, please do not continue doing it -- hey, it's okay if it doesn't work for the first time, if you keep trying, one day you will succeed".
Or, using the modern language of conditioning, the problem is that you cannot condition yourself into calmness by applying punishments, because whenever you try to "punish being distracted", what you actually do is "punish *noticing* being distracted". And your goal is to minimize distractions, but maximize noticing! So you actually want to reward noticing a distraction (i.e. instead of feeling happy that you failed meditating, feel happy that you *noticed* that you failed meditating).
If I understand it correctly (not sure about this part), you reduce the distractions by reframing them from first-person to third-person perspective. For example, if you are like "uhm, I wonder whether there is a new reply on ACX while I am meditating", notice what you are doing, and reframe it to "I notice that I am *having a thought about* ACX... and now I am going to ignore it", or just shortly "I notice that I am having a thought", or just "a thought". This way you kind of step outside of your own mental content; you are no longer thinking about the new ACX comments, but about the mere fact that you had a thought. This way the thoughts are less interesting, and they gradually start appearing less frequently.
(Then you become a p-zombie and there is no more suffering, only noticing. Congratulations!)
Those seem obviously terrible, especially the "q" ones. Nobody will ever remember which means which. As expected from a sclerotic bureaucracy, I guess.
I mean, sclerotic bureaucracy is kind of the ideal job description for the BIPM. The last thing you want among your measurement unit vocabulary and standards-setting agency is some kind of spirit of adventurous experimentation.
Hey! Why not redefine the meter according to wavelength of Britney Spears's top note? That seems cool and hip! Or...why not do away with standards entirely? Maybe it imprisons our thinking, stifles our creativity, to have just one rigid definition for the kilogram, we should acknowledge its roots in arbitrary white male patriarchal thinking that insists the weight of a liter of water be the same regardless of the feelings of the person who has to lug it around, and experiment with the idea that measurements should unabashedly reflect the lived experience of the observer -- if that man isn't heavy because he's your brother, why insist on crediting the scale that says 65 kg? Bah! If that girl identifies as less dense than air, because she's walking on sunshine, who's to say she's not right, for her?
But, surely, it's not too much to hope for a happier medium between dementia and youthful exuberance than this? (And for that matter, Britney Spears is pushing 45 herself. Time to brush up on your pop culture references :)
I'm not sure that timidness was exactly at fault here, but I'd prefer approaches that would lead to unit denominations being easily distinguishable, for an adventurous start.
You mean the symbols? Like "kg" versus "mg" or "K?" Why are they hard to distinguish? (I'll note in passing that the BIPM does not specify the names of units, only their symbols, since names differ significantly between langauges.)
Seems like there's a few potential dangers here. Russia could still end up winning enough to get concessions that reduce Ukraine's independence, Ukraine's own history of corruption could come back, or a popular post-war president could decide to just stay in power forever. Curious about peoples' thoughts or bets on this.
Wars tend to consolidate centralized power, except when things get so bad that everything falls apart. They aren't good for democratic institutions in either case.
In 2028? I have a rather unconstructive response that this is very, very hard question. Anyone confidently predicting an answer is probably being overconfident
In general yeah, one of the results of tetlock's superforcasting research is that it's basically impossible to do better than chance more than five years out, so this is pushing it.
Otoh this seems likely to be steady-ish once the war ends, so it's more or less a bet in the after-war status.
On a more general note, I'm looking for interesting forecasting challenges that would have a shorter horizon. Ideally a year or less. Any suggestions? I was thinking opioid deaths or crime.
But the outcomes of the war still seem extremely disjunctive, with Ukraine remaining more-or-less as it is, being essentially conquered by Russia, or becoming a nuclear wasteland, to list some of the possibilities. The question makes sense at all only if you have already assumed the outcome.
What do you think is the point of art? I strongly feel art has something to do with the capital T Truth, and this can be seen in very deep works like Moby Dick and The Brothers Karamazov.
It also shows up in how art can butt into other fields. TBK does that for religion and politics, among other things, and Waiting for Godot basically blows up academia and possibly philosophy with Lucky's monologue.
Do you feel art has a point other than aesthetics, and if so, what?
> What do you think is the point of art? I strongly feel art has something to do with the capital T Truth, and this can be seen in very deep works like Moby Dick and The Brothers Karamazov.
I think it's utility is in giving you different perspectives. You can't see all facets of any object or subject from only one perspective. I think the aesthetic appreciation comes from valuing the perspective presented, much like how almost everyone appreciates the beauty of Earth from space.
Art is to feeling as Writing is to speech. Its purpose is encapsulating and concretizing feeling to preserve it from the fleeting moment it arose in till the end life of the Art's medium.
Whether it says Truth is orthogonal, just like Language can be used to say truths as well as it can be used to say falsehoods, to dispense justice as easily as inflicting tyranny, Art is the same. It's just a way of capturing a feeling, and feelings can be justified by a truth or entirely deluded, most of the time they are somewhere in between.
Music is the best kind of Art in my view, and is the exemplar I had in my mind when I wrote the above. Drawing is the least kind of Art I understand or enjoy, it's sometimes good but sometimes just weird, evoking a "Why is this Art ? What should I appreciate in it ? What was the Artist even feeling ?" response from me. And then there's garbage like the "Modern Art" that just looks like a 5-years old was given a new color pallete, I fucking hate it, and I hate the society that rewards it with millions and billions in money. At least children's clumsy drawings has an innocence.
The only useful contribution to this conversation I feel I can make is: I'm extremely doubtful that all of the major forms of activity that we commonly call "art" -- let alone all of the minor ones! -- share enough direct or instumental values for reflection on the umbrella term to be very useful. I'm much more comfortable thinking about what dancing is for, what poetry is for and what aesthetic aspects of architecture are for, as quite separate questions.
It's kinda like your middle school literature class where you have the different purposes for a text: to explain, to persuade, to entertain. What's the purpose of writing? It depends on if you're talking about a textbook or a sci-fi novel.
Corporate logos are art for the purpose of making a brand easier to identify and remember. Documentaries are art made to inform while keeping the viewer engaged. Duchamp's Fountain was probably a commentary on art itself far more than anything aesthetic.
I can guarantee that the reason I commissioned art for the past five or so times was just aesthetics, though.
To create an emotional response. It could be simple entertainment, or catharsis, or some more complicated feeling, but it's an act of communication, and the goal of communication is to get a complicated idea or feeling from your brain into your audience's.
An emotional response isn't necessarily a Truth - you can make compelling and emotional arguments for all sorts of false or evil things - but it can sometimes gesture in the direction of one, and have more impact than a dry argument.
Art seems, to me, to be partly the attempt to surface essences of things and partly a way to communicate them. It seems interesting to imagine just one intelligent being in the universe. Could they really produce art, with no one to receive their interpretations?
The point of are is to express non-verbal "truths". Even when it's done with words, the truths that it expresses aren't revealed by the words, but only by how people react to those words. It isn't necessarily deep. It isn't necessarily pleasant. (And whether it's pleasant or not usually depends on your response more than on the art itself.)
OK, back to "truths". "truths" doesn't map onto external reality in any reliable way. But it's about a evoked response. Consider a can of Campbell's Tomato Soup. It was just a can of soup, and then Andy Warhol (and some co-conspirators) made it into a symbol of ???. I don't think there's anything very deep there, but it's still genuine. Perhaps it's about the difference between mass production and craftsmanship? Well, if that's what it is to you, then that's what it is. It could also be about the ability to store food against a famine. Or memories of childhood. It evokes a non-verbalized truth in lots of people, so it's art. (It doesn't have to be the *same* non-verbalized truth, in fact it usually isn't, and there's often no way to tell.)
If it doesn't map onto external reality in any reliable way, doesn't that make it misleading to call it "truth"? I suppose there can be true facts about the viewer's mind that don't count as external. More generally I'm curious about what Carlos means by "capital T Truth", and how it relates to truth in the normal sense where "The sun is bright" is true because the sun is bright.
It maps onto reality in a view-specific way, that depends on things that aren't explicitly transmitted. If you can be explicit, then it moves from art to craft or skill. There has to be some level of general agreement that it's saying something significant, but there doesn't need to be (often isn't) an agreement on what that something is. What did Warhol's tomato soup can mean? It meant something to a lot of people. I don't think there was a general agreement about what it meant. (And yes, that was art. I may not like it, but that's irrelevant. I'm less sure that Oldfield's sculpture of an ashtray full of cigarette buts was art. Perhaps if as many people had been exposed to it, it would have created as strong a reaction. OTOH, this is vaguely like saying whether something is art or not depends on the public reception of it, and that's wrong. But just thinking of Oldfields sculpture I feel a sense of repulsion. So perhaps it really *is* art (to me). And perhaps that subjective element is a part of what makes it art. To me the truth that it's expressing has something to do with the smell of old cigarette buts, and something to do with the associated mess, and those are both true parts of it, and don't capture the essence.
So, yes, it's "truth" even though there's no reliable mapping. It expresses an internal sensation, and that's just as real as anything else, even though it's not something that one can easily pin down (and even though it can be habituated away).
The purpose of art is entertainment. Some few people are entertained by "deep", truthy-sounding things, and are catered to perhaps out of proportion to their numbers. But fiction especially is a set of artful lies and bears only an antonymic relation to the truth.
Claiming a deep purpose to art is almost definitionally pretentious.
i really like this piece (and her substack Notes of an Aesthete) on, basically, the social justice-y utilitarianism that has swept the art world, and how it destroys the aesthetic experience in favor of prescribing moral value to a work of art, or using art to diagnose the ills of the time from which it came...giving viewers “the feeling of knowingness” rather than their own private meaning/experience
I wonder if the "moral value" ideology is an oddly public exponent of the 19th-century "beauty is truth" adage: truth in the "truth to power" sense, with all of the pitfalls that come with that assessment, especially when that power is itself prescribing what truth it wants to hear...
“The arts of Zen are not intended for utilitarian purposes or for purely aesthetic enjoyment, but are meant to train the mind, indeed, to bring it in contact with ultimate reality.” - Suzuki
I like the Zen conception of art being about the creative process for the artist. Most people I know who do music/paint/dance/writing etc, do it because creating brings them fulfillment. I can't remember ever hearing someone say they have this deep truth that they can only get out through painting or by dancing it to the world.
Art is a big category, and sometimes I get annoyed that we like to lump painting with music with literature, because these have nothing in common other than being products of human imagination.
Maybe they never conceptualized it as bringing out a truth, but they surely don't think that they are therefore lying. Maybe one can go non-dual and say art is neither true nor false, but that is too esoteric for me.
Dewey's "Art as Experience" is a good one to read. I haven't read for a while but I think he says art is a kind of "transaction" of experience between the artist and the viewer, rather than the crafting of an object. The key here is that it's a transaction and the viewer needs to be engaged in some way, not passive.
Rather than it being replaceable by an essay it's actually the raw experience of the characters, the sound of the words, the pacing, the plot - in combination with - the ideas that an essay might communicate in a shorter form.
TBK could have some of its ideas taken out into an essay, and it would have mostly been groping without any real propositions or conclusions, part of what makes it good is the raw experience of the struggle between all these ideas on morality, purpose, meaning, faith. But it's also not just the ideas being communicated, but the qualitative experience of those characters and ideas and situations which are the thing. Something that this is making me think about is that ideas themselves can have a subjective quality to them, like how does nihilism feel in your body compared to faith? You can experience this by inhabiting what it's like to be both Ivan and Alyosha in turn, which is what the book does.
The thing is a whole experience and reducing it would change it totally (although I'm not saying you can't analyse the key parts of it and it's still meaningful in some way) - it's also always changing within the context of us as individuals.
It is a lossy communication technique that leverages shared human biology and culture in order to pack a lot of information into a relatively compact message. It's similar (in principle) to transmitting music using a MIDI file, where the sender and the receiver have slightly different sample libraries. However, since the libraries are vast (compared to MIDI), every human who perceives the art may extract layers of meaning from it that are unique to that human (in addition to the surface-level message).
I am a big fan of art. As for me, art reflects the essence of the artist, his thoughts and vision of the world. The same Mona Lisa, how many secrets is kept in it? Art can be called as paintings, music, poetry, as well as films, theater and so on. Art makes our everyday life brighter, during a war or a pandemic, art "supports" us. Without art under the wrath of the media and politicians, what would happen to our souls?
Art is an expression of ideals, and an opportunity for reflection on those ideals. Something like Star Trek is showing an ideal future; something like the Yakuza games are showing an ideal world where egregiously flashy violence is still non-lethal. Then something like the Mona Lisa is portraying a figure, and letting the audience guess as to why that figure was important enough to paint.
Do you want to experience the other ZEDE in Honduras that's not Prospera?
They're not generating as many headlines, but they're a jewel. Their value proposition is affordable housing and basic security service for local Honduras in a part of the country that lacks both
It has 80 people living in the zone, a 100% crypto economy and it's now opening the doors to digital nomads who aim to bootstrap their businesses.
Bootstrap.city, together with Infinita VC, is now organizing a mini-conference that features 1 day of VIRTUAL conversations (March 18), and 1 day visit to Morazan IN-PERSON (March 19)
So you can tune in from everywhere for the virtual component - and if you always wanted to see if for yourself, this is a great opportunity to visit, as we'll help with travel logistics.
Okay, I'm wondering, "bootstrap.city is not officially endorsed or affiliated with Ciudad Morazán or the ZEDE in any way". So what is it?
Also, I am a little wary when someone is trying to convince me to move to some place, and the website greets me with CGI graphics rather than actual images of the actual thing. A few clicks further, they do have real images as well, and... I've seen 50$ motels that looked more appealing.
Morazán is building and marketing the city for Hondurans. However, after having spent much of the last year in Morazán, I believe Morazán would be a great place for minimalist Westerns that have or want to have remote income. I created bootstrap.city to bring Morazán to the attention of the people that may be interested in living here. But since I do not work for Morazán, my promotion of the city is not official.
And some pictures of the city including a drone flyover can be found on Morazan.city
> Living accommodations.
Morazán is not for everyone, but I, and the three American's currently living in Morazán are very happy with the accommodations. What is it about Morazán that you find unappealing, if you don't mind me asking?
I saw some pictures, and I was wondering: who is taking care of schools, or maybe stuff like public playgrounds, some cultural offerings and similar? How about security, if needed? I would appreciate some insights on how you organize those.
There's no school in Morazán at the moment. Plans to build a school were put on pause due to the uncertainty caused by new anti-ZEDE administrations.
>cultural activities
Morazán occasionally hosts BBQs and other similar events but that's the extent of official culture activities. Residents organize their own events and parties though!
> Security
Morazán has ~10 police that protect the less than 100 residents. There are always multiple police on duty patrolling the property. Security has never been an issue inside Morazan
Thanks for the reply. "What is it about Morazán that you find unappealing?" I saw the pictures of the apartments ( e.g. https://www.morazan.city/wp-content/uploads/2022/12/Kids-playing-during-the-afternoon.jpg ), and they look very soulless and barren. No gardens, no decorations, no playgrounds... of course, I'm applying first-world standards, but if you want to attract westerners, that's par for the course.
Massimo, Morazán's founder is a Westerner (Italian) and his ZEDE has attracted a number of Westerns including myself. In terms of parks, there is a park in Morazán with a little playground. And as you can even see in the picture you shared there are many trees in Morazán.
There were plans to spend over $50k building a garden before the new administration introduced regulatory uncertainty. If the development resumes in earnest, more money will be spent on green space.
Hi Scott, it's Alex from bootstrap.city thank you for your questions.
Crypto Economy: Although Morazán's economy is small, it is currently possible to live pretty much your entire life in Morazán on crypto. Rent, utilities, the local mini-marts, and the local eatery all accept crypto payments. The primary coin in use is a small Honduran Lempira based stablecoin, eLPS. The coin is popular because it's the unit of account of Honduras, has sub $.01 gas fees, and because the postage protocol makes it is possible for users to transact even if they don't have any gas in their wallet.
Population: Construction is paused due to ZEDE uncertainty. Currently there are 64 60m2 2 bedroom units in Morazán. There are a few units still available, but they will likely fill up in the coming months! I don't know Prospera's latest resident count, but I suspect it is in the same ballpark as Morazán's 80 people-- though Prospera is likely to pull ahead with the completion of Duna later this year.
As an early resident and superfan of the city, I'm happy to answer any question you may have about Morazán and/or my unofficial rebranding, bootstrap.city. I can also connect you with Morazán's resident historian, "mayor", and developer if helpful.
P.S. your Prospera article played a big part in me moving to Honduras and eventually Morazán!
Bank transfer fees in Honduras are expensive (usually $1 or more) and cash has the disadvantage of needing change, storage (there's no bank or atm in the city yet), and the inability to make online payments.
That being said, I'm not a crypto or stablecoin maximalist. People should use whatever forms of payment they prefer. I love Morazán because their minimal taxes/regulations make monetary competition as easy as possible.
Does anyone know any good articles criticizing Yuval Noah Harari? I’ve been very influenced by his first two books, but I have a vague sense that other historians don’t like his stuff very much. Any interesting stuff people can point me to would be appreciated!
i haven’t read anything by Harari, but i am currently (slowly) reading The Dawn of Everything by David Graeber and David Wengrow, which argues against Harari’s view of history. i read this article about their work and it mentions their conflict with Harari...
Early Civilizations Had It All Figured Outhttps://www.newyorker.com/magazine/2021/11/08/early-civilizations-had-it-all-figured-out-the-dawn-of-everything
I think historians as a group always hate every popular, big picture history book. Those books are just fundamentally at odds with how academic history works. I haven't read Harari specifically, but "historians criticise him" just means he wrote anything about history.
if this is not too boring/useless, I'd love to team up with someone and publish a proposal for a ACX Forecasting Mini-Grant to build something along the lines of "Tools to automatically create, resolve, [... and perhaps analyze/correlate/market-make ...] markets for basic sport events". Can code, lack substantial knowledge of the economics of PM, will be happy to figure it out together. LMK at v.golev@gmail.com
Is it too convenient to postulate (as per replies to the above tweet) that this was part of a conspiracy that emerged following the GFC and occupy wallstreet to take advantage of identity-focused left-wing elements by covertly arranging for these voices to be amplified to drown out criticisms of wall street, big business, capitalism and the 1%?
I had the same thought a while back. I noticed that half of the IDW worked at some point at a conservative think tank. Seems like they might have been mobilized to enrage identitarians and provide a smoke screen for more class-based concerns.
I don't really see how it can be convenient for conservatives: if wokeness is responsible for holding back the teeming communist hordes and the 123.7 gazillion dead that would follow, wasn't then a net good for the world?
(That is, assuming conservatives actually want to make sense and not scoring cheap point, so one minute the wokes are Marxists, the other they are a Wall Street conspiracy)
But aside of internal consistency, the theory does not particularly fit empirical reality: one could argue that the woke have maybe robbed economic leftism of some salience, but their positions are as leftists as the ones of Occupy Wall Street. Their idols such as the Squad shift seamlessly between 'eat the rich' and wokeness. Almost all progressive are leftist both socially and economically. As wokeness got traction, so did socialism, after about 70 years of irrelevance (at least in the American context), and they were promoted by the same institutions and demographics.
Alas, at least the kulturalbolshevik, ehm sorry, cultural Marxist thing was homegrown on the right. Why suddenly the need to recycle the cope of disgruntled Bernie bros?
Nice way to adress approximately the first line of my comment. About the rest? In particular, the pretty obvious and undeniable correlation between wokeness and economic leftist?
Or are we seriously arguing that the modal woke wants lower taxes, less regulations, etc than the modal non woke (on the blog of a famous antiwoke libertarian, of all places)?
Exactly, and mainstream news is all corporate now. And the major parties main donors are all corporate now. There's a very conspicuous absence of real left wing talking points on mainstream media. Just look how hard everyone tried to dismiss Bernie Sanders, then how hard they undermined him when he couldn't be ignored, to the point where they had to break the rules to slow him down.
I still like the the concept of "prospiracy" for this. The basic idea is that humans are naturally very good at organizing into groups even without overt coordination and will behave to advance the interests of their group while easily maintaining a degree of plausible deniability through adapting a moral system that allows them to retain their privileges while simultaneously allowing them to attack and attain the privileges of other groups. That doesn't mean there never is any conspiracies at all, but it means that often the explicit conspiracies are only a tiny part of a much larger prospiracy that is advancing group interest.
For a politically somewhat neutral example, take a 3-letter agency that over time adopts a policy of all documents being by default classified for a long time and making it extremely easy for any agent or group to limit access to their documents, as well as a culture where questioning the classification of documents is tabooed with a justification of safety concerns for secret agents and sources. There will surely by a ton of smaller conspiracies inside that agency, but the entire system of secrecy is a prospiracy advancing the interests of the agency, and any time a smaller conspiracy blows up everyone else can genuinely say - and believe themselves - they're not at fault since they didn't know and besides, this does not mean we should abandon the safety of our agents & sources! That the system allows them to do whatever they want with little to no oversight is just coincidental, of course.
For your example, it should be obvious that someone sucessful with lots of money may feel much more comfortable with spending a bit on corporate DEI and uplifting minorities into middle management than with radical calls for re-designing the entire system. Likewise, there is a decently-sized class of hyper-motivated social climbers with claims of minority status and little interest in family formation that also has little interest in anti-capitalist rhetoric. It's a match made in heaven, in a sense.
But the same, btw, is largely true for most communist, anti-capitalist etc. movements. Many of its members are (children of) elites that are dissatisfied with their place in the status hierarchy and just-so-coincidentally adopted a moral system that allows them to "rightfully" displace their betters.
Interesting. And as near as I can tell it's entirely consistent with a traditional Marxian analysis! But I think you still need the material analysis to make sense of the phenomenon we're talking about. Like the "match made in heaven" you describe makes sense in prospiracy terms, but why does this become a socially dominant coalition? For that you have to talk about class interests and ideology and levers of power and such.
Regarding the last paragraph... I mean, do unions count as "anti-capitalist"? There is a long history of worker-led challenges to capitalism; it's not all just podcasting Vassar grads. The idea that some segment of the elite needs to "defect" to support the working class in order to motivate movements toward socialism goes back to Marx, but you can't just reduce anti-capitalism to squabbles within the capitalist elite. What you might say, though, is that those workers themselves are engaged in a "status game" to "rightfully displace their betters." (A status game with real material consequences.)
The "conspiracy" is just capitalism. Everyone spending all their time arguing over "wokeness" suits the owners of capital for obvious reasons - they've always benefitted from emphasizing racial/ethnic/cultural identity because it divides society along lines other than class identity, which is the only sort of identitarian consciousness they really fear. (Comically, the conservative critiques of "woke capitalism" are like 75%; it's just that last 25% where they completely invert the class valence of what's actually going on.)
As a bonus (from the divide-and-conquer perspective), the corporate/DEI-style approach to racism (and sexism) obscures what are very real forms of discriminatory bias by converting the powerful language of, say, MLK or even Malcolm X or the Black Panthers into something you need a Fancy Liberal Arts Degree Decoder Ring to understand. The very project of "solidarity" becomes another way to enforce class divisions.
In any case, I don't think it's too mysterious why Fox News and The New York Times alike are a lot more comfortable talking about the precise delineations of socially acceptable expressions of gender identity than they are talking about capitalist exploitation.
I guess we're lucky that capitalists are competent enough at divide-and-conquer to have actually been pulling it off so far, because no proposed radical alternatives to capitalism have any hope of being an improvement on net.
I'd say we're lucky they *don't* entirely succeed, because capitalism in the absence of compromise with the interests of working and middle class masses would immediately succumb to its own internal contradictions. :)
Or do you think capitalism in the absence of any welfare state at all, or consumer and worker protections, or financing of public goods would be the best of all possible worlds?
It seems that we are conflating two meanings of the word capitalism here: 1) the current capitalist establishment 2) the general economic system. While preserving some elements of 2 seems inevitable as of now no matter how radical a reform vision one may have, there is no need to preserve 1.
There's a much simpler and less conspiratorial narrative. It correlates with the Democrats attacking Republicans as racist. Which they do, among other places, in the media. This explains, for example, why it actually went down from 2008-2011 (when it was a losing message vs the Tea Party) before rocketing back up in 2012 when Obama was on the ticket again.
I'd be interested in an analysis that separates out reporting of what politicians said and opinion pieces by surrogates vs in the articles themselves.
A little bit, but you can often get the appearance of conspiracy without any actual collusion as long as people understand what is in their own interests.
It doesn’t really need much conspiracy, just the idea that capitalism is controlled by capitalists. All the bleating about inequality in modern capitalist societies tends to be identitarian and not class based, you won’t see companies promoting bigger dividends taxes, caps on CEOs, increased employee benefits but rather DEI.
Since AI risk still seems like the leading Topic Du Jour, I'll point out here that there is a NIST AI regulation framework which has an associated draft "playbook" listing best practices for responsible AI development, and they are accepting public feedback through tomorrow, February 27. The playbook is at https://pages.nist.gov/AIRMF/ and feedback is accepted by email to aiframework@nist.gov.
So my second, more selfish, purpose here is to see if anyone thinks that the sort of person who would write that feedback, with no ML experience but with experience in other domains of high-reliability-standard, high-risk, and/or highly-regulated software development, could find a useful niche in the AI safety field. If so, I'd be interested in pointers/intros to people who might know more about such a niche.
It's not fully don't-say-bad-things-ist; there are mentions of interpretability and corrigibility.
But it does explicitly recommend that AI actors need "diverse" teams in order to be effective, and it doesn't explicitly mention the "rogue AI could kill everyone" thing as far as I can see (though I've only skimmed it).
My impression is that ECT (electric shock therapy) is remarkably effective as a treatment for severe depression, and is seriously under-utilized. Thoughts/comments?
This turned out more controversial than i expected. I've wondered about the issue for quite a while, and since Scott (I think) is a specialist in management of depression, I thought this would be a good place to raise the issue. I'll make some follow-up comments here and then just see what others have to say.
1. Many years ago I spent time in a psych ward, and had the opportunity to see maybe 10-20 depressed patients treated with ECT. In general, the effects I saw seemed remarkably positive.
2. A recent "authoritative" review (a summary of current expert knowledge published in a high-status journal) appeared a year ago in the New England Journal of Medicine https://www.nejm.org/doi/10.1056/NEJMra2034954
My impression of its take-home message is that yes, ECT is extremely effective (though not perfect), its physical side-effects are rather minor, it does impact memory (but for most patients the effect is short- to medium-lasting), and it might have detrimental cognitive effects in some patients.
3. Assessment of psychological/mental/brain effects is extremely difficult, because profound depression has a huge impact on mental function, making it tough to figure out the frequency and magnitude of psychological injury caused by ECT (if any).
4. There has been some improvement over time in the technology of the procedure. I don't know whether this improvement is important.
5. ECT has a terrible public image, partly due to a couple of disturbing movies and novels, and partly because in its early days it was used inappropriately.
6. As with other non-medical approaches to psychiatric disease, when looking back more than 50 years we typically forget the practical problems then facing severely disturbed psychiatric patients and their caregivers.
7. Objective truth about ECT is difficult to ascertain. No doubt some ECT patients have warped/misguided/agenda-driven opinions about their past. It's likely that ECT practitioners tend to self-justify their behavior. Randomized controlled trials on healthy volunteers are ethically problematic. It's fairly straightforward to demonstrate efficacy for depression, but really hard to critically evaluate the magnitude of possible injury.
My impression is that the effectiveness of ECT is well-established, but the reason it's not prescribed more is because of the risks and side effects. Only when all other treatments have been thoroughly demonstrated as ineffective does someone qualify for ECT.
My impression is that the risks and side effects of ECT are not that substantial (perhaps I'm mistaken?). Standard antidepressant medications (which are somewhere between not very effective and almost worthless) have significant side effects and risks.
I mean, if you don't mind having cognitive problems for the better part of a year (note that this is longer than the positive effects of ECT last, so if it winds up being a repeated thing this is ~permanent) and chunks of your memory permanently erased, sure, not that substantial. Some people disagree, though, like Ernest Hemingway who disagreed sufficiently to shoot himself in the head.
There's a large amount of literature (and in my case a couple of anecdotes from people I've met) confirming that those recommending ECT systematically undersell the side effects.
SSRIs alone are barely better than placebo, but tricyclics aren't and they're pretty safe (like, okay, they can cause haemorrhoids, but you can fix those).
ECT is known to cause memory loss and cognitive impairment; the extent of the damage is unclear because the metrics used to study it are severely flawed. The reassurances given to patients do not accurately represent what is known.
The evidence of its effectiveness is also much lower-quality than usually claimed; it's not clear that it would be ethical even if it weren't known to cause permanent brain damage.
I have quite a bit of "2nd-hand" experience with ECT, but I'm no expert on the procedure. However, I'm an experienced neuroscientist. It's not that easy to find unequivocal answers in the "scientific literature," because it takes a long time to get the facts right, especially when the facts are technically difficult to acquire. Your first citation was published in 2006, and it's not clear to me that your second publication was peer-reviewed (perhaps I'm mistaken?)
I have loyally lurked on ACX (and on SSC before that) for about 10 years. I think maybe I commented once back on the SSC version.
Anyway, I de-lurked to write this. I got a series of ECT about 25 years ago. It. Saved. My. Life. I had been committed to an in-patient facility with nearly catatonic depression after a dead-serious and nearly successful suicide attempt (not my first). ECT is not instant. But it provides the "reboot" necessary to start tackling the hard work of recovering mental health. Follow-up included years of meds, cognitive behavioral therapy, and later intensive Buddhist meditation practice. Nowadays I'm med-free, a 10 year cancer survivor, never since have I even come close to considering suicide, and grateful for every day.
I just want to say that this made me feel that love and gladness thing we sometimes feel for people we'll never know. It's good to know that you're in the world etc
I'm looking to start a collective of sorts aimed at developing a new framework for learning, child development, and human development. This framework leans on complexity science and the work of John Vervaeke, Dave Snowden, Zak Stein, Ken Wilber, David Sloan Wilson, Robert Sapolsky, Steven C. Hayes, among others.
The scope would be discussions, "curricula"/methodology development, and maybe a collective substack for ruminating and publishing. The goals of the collective should lean more towards the practical versus theoretical.
Please write to protopiacone at gmail if interested.
I took a crack at estimating the welfare cost of helicopter noise, which I hate more than any reasonable person should hate anything. After doing the calculation, I discovered that there’s a bill before the NY state legislature proposing a tax on helicopters that implies a similar valuation to the one I arrived at.
Ever since living under a helicopter flight path, I have taken the position that it should be legal for civilians to shoot down any helicopters they see, especially if they wake you up from a nap.
>which I hate more than any reasonable person should hate anything.
Have you considered being more reasonable?
To be slightly less flippant, the studies you cite center around construction of new airport runways in urban areas, creating new heavily traveled airliner flight paths. That's different from a helicopter line having to account for everybody around their route who freely admits that they consider normal human background noise an incitement to violence.
Not to put too fine a point on it, but maybe part of the reason NYC property values are so high in the first place is that it's the sort of city where you can get a helicopter ride relatively easily. If you really really don't want to hear helicopters ever there are a great many places in the US where you can move to make that happen without extracting wealth from your neighbors to enforce compliance with your hypersensitivities.
To be clear, I personally hate helicopter noise, but it seems clearly the case to me that noise constitutes an uncompensated negative externality that is fair game for taxation, and the elasticities cited in the airport studies make that case. Noise from a low-flying helicopter in NYC is actually similar in amplitude to noise exposure in many of the areas included in the runway studies.
Generally, I do favor exit over voice in many cases, but my own hypersensitivities aside, I believe it is the case that there is *some* optimal level of helicopter traffic, including tours and billionaire air taxis, for NYC. I think that sufficiently accounting for social welfare in taxation of this traffic would work to bring it down to this equilibrium. You are right that part of the value of NYC is that it is the kind of place you can easily get a helicopter ride, but there is some amount of helicopter rides that decreases, rather that increases, NYC's appeal.
First off: Thank you for being more gracious in your rebuttal than I was in my comment. I was in a salty mood when I wrote that and was more than a bit uncharitable. That being said I still have a few questions/objections:
A. "there is *some* optimal level of helicopter traffic, including tours and billionaire air taxis, for NYC."
I agree with you, there is. And you're already living in it.
Commercial and government helicopters have existed for many decades now. I assume they have existed for almost as long in NYC. I won't claim to be an expert in NY law but I presume many of these conflicts were litigated long ago.
B. I'm sorry but I'm still inherently skeptical of any analysis that relies upon a ten thousandth of a decibel
C. When I hear a helicopter pass over, I think first of the positive externalities. Maybe my opinion is shaped by the fact that my aunt was for many years a nurse on a medevac helicopter. Maybe my opinion is shaped by the fact that for a year I lived in a dorm room that looked down on a helipad at a major hospital. But when I hear a helicopter pass overhead, the only thought in my mind is "what a beautiful machine that is, that slaps the air into submission for our benefit".
I also generally share your feeling about helicopters more broadly (and all types of similarly magical technology), but NYC may be a special case — there has been a massive increase in the number of—let's say—"elective" flights over the past few years (see e.g. https://www.bloomberg.com/news/articles/2022-12-14/nyc-complaints-about-helicopter-noise-top-rat-complaints-in-some-parts-of-city). It really is quite an unusual situation for NYC and for US cities in general. So the current status quo doesn't really resemble the equilibrium of, say, 2019. In fact, these conflicts weren't litigated long ago — the litigation is happening now! Noise has always been a part of life in New York, but this is something new. Hence the blog post.
On (B), per my subtitle, this is a back-of-the-envelope, and skepticism comes with the territory. My way of looking at things like this is "rough numbers are often better than no numbers." To give some sense of the value of this exercise to me, my own hypersensitivities would lead me to want to pay somewhere in the low thousands to do away with the noise, so this BOTEC is a good sanity check for me in terms of how people in general might actually value avoidance of this kind of inconvenience. $4.5 million in social welfare costs is really nothing in terms of the kinds of difficult-to-quantify burdens borne by everyday New Yorkers (thinking of pollution, poor infrastructure, crime, etc) — but in my view enough to justify a Pigouvian tax.
And this is my more sincere reason for writing the post — a worked example of an area in which the onward march of glorious progress might conflict with people's desire for a little peace and quiet, and a proposal for one way in which these conflicting aims might be reconciled.
I mix with the helicopters over the Los Angeles Basin pretty regularly, and they're almost all police, fire, medevac, and news/traffic. With the news/traffic helicopters usually staying at a reasonable altitude. So, I'm not sure that a tax will quiet your neighborhood all that much. And anything that does significantly reduce helicopter noise in urban areas, would I expect have a cost measured in blood.
This is true about LA, and I write mostly about New York — though I’m not sure news helicopters provide much social benefit. Regardless, they should indeed be taxed to compensate for the negative externality.
Yeah motorcycles are ~500X the problem helicopters are, especially nationally. I would pay multiple thousands of dollars personally to ban them, so they are least harming me that much.
Yea I'm Chicago, and living in an urban core with lots of large mid-rise and high-rise buildings makes loud motorcycles an absolute plague. (The buildings concentrate the sound and make it bounce around like you would not believe until you've experienced it.) We've been getting large groups of them regularly May through October for at least a decade now. The city has taken some steps which have helped a bit but only a bit.
I know this pain. The stoplight in from of my building is a popular spot for motorcycle riders to rev their engines because the noise is heightened by the surrounding brick buildings, then, at the green light, riders slam on the gas for a burst of speed when taking off, only to drop their speed 20 yards later for a crosswalk. I despise the warmer months because of motorcycles. And it also seems the time when everybody rips the mufflers off their cars and trucks.
Our species would be so improved if there were still a large semi-intelligent predator around with excellent smell and hearing. Imagine a T. rex with the intelligence of a St. Bernard and the hearing of a fox and the nose of a bloodhound. Humans in public would be uniformly neat and quiet, leaving no trace.
We've been meaning to ask ChatGPT how to solve climate change while pulling 7 billion people up to present western living standards without the use of any fossil fuels or nuclear fuels.
The tone of your blog seems to be "fighting climate change is hard; let's give up".
And, if you're expecting a UN conference to solve climate change, giving up might be reasonable. But the purpose of the COP conferences isn't to solve climate change, it is to bribe government apparatchiks into toeing the policy line while other people solve the actual problems in the background.
I have not seen anyone I actually consider reasonable try to propose anything with this part of the caveat. Everyone with a lick of sense knows nuclear is critical (for the century-magnitude short-term) to transitioning to zero carbon, but the populace thinks nuclear is icky so there's no political will to push it right now. Only fringe crazies are actually *anti*- renewables, so they're a politically safe thing to push for an infra project, if disappointing in relation to actually solving the problem.
(Though it's not totally ignored- Biden's infra budget allocated $1.8b for nuclear, +50% over the previous year.)
I asked it, it said, "It is unlikely that we can solve climate change while simultaneously bringing 7 billion people up to present Western living standards without the use of any fossil fuels or nuclear fuels. This is because the energy demand required to sustain such a standard of living is currently too high to be met solely by renewable energy sources." So there you go.
Just started reading your substack. As you lay out, the world leaders attempting to usher in the new green world have no understanding of the knock on effects of their feel-good policies and initiatives. I can't help but fear that these are the same people soon to probably regulate and control AI. It's frightening.
The very fact that there's an electric fence around this problem tells us there's a problem with this problem.
There was an electric fence around the Wuhan Lab leak hypothesis. An electric fence being, if you happen to mention that it is possible that COVID 19 leaked out of a research lab, was a sign of irredeemable racism, and complete cancelation and banning. Why was there an electric fence around the subject, but the truth of the matter is dangerous for very powerful people.
Likewise there was an electric fence around the Hunter Biden's laptop story, now we found that to be true, and it could have been very dangerous for very powerful people.
And we still have an electric fence around climate change ... for what purpose, but to stymie the scientific method.
Let's deconstruct the electric fence, take a page from the 1960s Hippies, and Question Authority, apply reasonable skepticism.
Electric fences aren't built around important subjects for the good of humanity.
What exactly is the dangerous thing in Hunter Biden's laptop? IIRC none of the NY Post's sensational allegations got verified by anyone else, and despite Rudy Giuliani and Trump talking it up as a big October Surprise for weeks, they never actually released anything damning. As far as I know the stuff that actually got verified boiled down to "The missing laptop did, at some point, belong to Hunter Biden" and "Hunter Biden made a sex tape of himself."
The fact that nobody is talking about something doesn't mean it's censored by the people in power, it might just mean that there's nothing to talk about. Nobody is talking about how lizard people control the government either, but that's not because there's an electric fence around the topic.
> What exactly is the dangerous thing in Hunter Biden's laptop?
Not sure what you mean by "dangerous". The politically compelling part is the clear influence peddling and possible bribery revealed by emails on the laptop.
> IIRC none of the NY Post's sensational allegations got verified by anyone else
Which allegations, and what constitutes verification? Hunter's former business partner confirmed that those emails were legit and that a certain percentage of funds in those deals had been reserved for Biden senior.
Do you maybe mean that no other journalists have verified that the laptop contains the material the Post claimed it contained? The Wikipedia page covers lots of details, including a verification of the emails from CBS news and The Washington Post:
If those emails weren't legit, it seems like it should be a simple matter for Hunter to deny their legitimacy. Absence of denial doesn't entail guilt, but it is a little suspicious.
The evidence of 459 crimes, including child porn, according to a group calling itself The Marco Polo Group. But the FBI doesn't seem to care to look ... this is the same FBI which twisted Twitter's arm to harming anyone who mentioned that story.
It was the censorship of the story for political purposes by social media and the legacy media. This censorship is well documented, admitted, and regretted by many of them.
Stop by and see us some time. We cut the cable to the electric fence when it comes to neo-environmentalism.
We don't come at this lightly. 200+ yrs of applied experience as environmental engineers, geophysicists, legal, finance and risk professionals. Substack makes us easy to find (link to write under "Writes...")
I don't understand where we're supposed to get the computing power to build AGI. Wouldn't it require some kind of massive technological paradigm shift, like finally making quantum computers work? Is there any indication that this is actually going to happen in the next decade or so? Or is the idea that we could fully emulate a human brain using only currently existing hardware?
> Wouldn't it require some kind of massive technological paradigm shift, like finally making quantum computers work?
No, improvements in algorithms have outpaced improvements in hardware for some time now, so AI performance is periodically doubling on two fronts. This is why some people have a 5-10 year time frame for the first AGI.
The thing is that the AI explosion that we have seen in the last few years does not come from spectacular hardware advances.
Of course, people have built hardware that is specialized for backpropagation, and this helped a lot. But this did not require cutting-edge hardware improvements. Rather, we figured out that we need a certain type of operation really often with good parallelization, so we built specialized hardware for that. We could have easily done this 10 years ago. (In some sense we did, it was similar to the GPUs we had back then.)
Now, if the AI explosion in the last 10 years did not depend on breakthroughs in hardware, then AGI might also not depend on it.
People who expect an AGI apocalypse generally think that the AGI will initially run on some supercomputer owned by Google, a university, or a government. Then as it improves further it will hijack a lot of hardware through the internet. Then it will (through social manipulation, nanotech, or some other method) rapidly create more new hardware for improved versions to run on. This last step will probably also involve currently-unimaginable high powered computing technology invented by the AGI. Whether this is realistic is debated, but it's certainly been thought through.
It's unclear what 'computing power' would be required to emulate a human brain, and even less clear how much might be required to build something dangerously intelligent.
If you think that our current methods make good or reasonably close to optimal use of the power available in generating some kind of intelligence, then sure. We will be limited by computation for some time to come.
The other view would be that we're brute forcing things and making relatively poor use of computing power compared to what is possible, and that some threshold exists where our AI will be able to dramatically improve itself (making far better use of it), leading to FOOM.
Well, if you think the basis for human thought is entirely networked-based, meaning nothing important happens inside the neurons, i.e. it's just a question of what messages are carried between the neurons, then the brain's clock speed is maybe 200 Hz. So I think the idea is that a processor speed 20 million times faster, which is within reach, might be able to compensate for having far fewer nodes. The human brain has like ~100 billion neurons, so if you could do it with a factor of 20 million fewer nodes, you only need 5 million nodes, which is tricky but perhaps doable with specialized silicon.
Okay, I'll rephrase the question. Is the idea that whatever model we use to create AGI will be able to run on currently-existing or near-future hardware?
People who expect AGI to come soon expect it to be achievable with not-that-many more doublings of compute power. They also expect some form of Moore's law to continue to operate until then.
Both of those are questionable assumptions but not crazy ones.
Post-AGI economy: are there serious economists who actually try to model what can happen when AGI is able to do what any human can do, but cheaper?
How do we get rid of the vicious circle: no reason to employ people - no prospective consumers - no reason to develop businesses.
Is UBI the only answer? Will there be new ways to redistribute wealth? Who gets to own resources like factories and data centers?
As an example: maybe instead of the brain and body we currently lend as labor workers the main asset becomes compute power? So I can spend it on asking my god-like assistant to invent something good that other people will be willing to buy.
I came up with this idea after thinking for 15 minutes about this topic, so it won't probably work. But it was meant to be an illustration of the kind of answers I am looking for.
This is the question I'm interested in, qua AGI. We have no idea how a post-automation society could be structured. There are factors to consider alongside: stagnation of global population growth (projected to occur in 100 years or so), expected falling cost and scarcity of energy, and extreme wealth inequality.
In the past, as Scott covered in Secular Cycles, periods of inequality and stagnation were often broken up by war, which is hardly worth considering. We can expect perhaps mixed moderate success implementing policies like wealth tax globally. This will be an important transition period. As it stands, we may be looking at a game of musical chairs - when the music stops, the elite will have unrestrained access to resources and land, while the rest of us are pacified with a UBI Brave-New-Diamond-Age lot; unlimited virtual entertainment and artisanal craft but very limited access to capital, resources, and least of all decision-making. What sort of social mobility could there be when economy itself is basically nil? The real currency left is power.
The Socialists appear to romanticize a future where everyone either creates poetry, philosophizes, and creates artisanal pots. They're blinded by their own biases, because this is not largely what people want, and human nature will not change. People want to create value for others, to collaborate, to compete, to be validated, to innovate. Work quality is the best predictor of happiness among men - it provides meaning. Ideal post-AGI economy needs to be a support system for those desires. A user suggested "role-playing", but this will not suffice - same reason video games can't replace institutions, society and economy as a vehicle for meaning.
I think at this impasse, the "cool useful shit" left to work will greatly begin to focus on space exploration. There might be some straggler issues to clean up around the globe - peace and improvement of quality-of-life worldwide, eliminating more afflictions, climate. I think the key will be institutions that help remove barriers to capital and resources for people to *build*, even if not with their hands.
"A user suggested "role-playing", but this will not suffice - same reason video games can't replace institutions, society and economy as a vehicle for meaning."
I'm that user and perhaps I failed to explain well what I mean by roleplaying. I made examples of present day jobs that I think fall in that category, such as live musicians and college teachers. People in those fields don't think that their lives are devoid of meaning.
I don't think the analogy works. Teachers provide value to others, so do musicians, actors - it's a monetary exchange. Having a role playing partner would be more analogous to a sports colleague.
A small fraction of people manage to play sports professionally; this is their job, they're at once entertainers and playing to win. Unpaid roleplaying is just play. For the same reason not everyone wants to be an artist, not everyone wants to be an entertainer.
Doing anything at leisure (i.e. play, crafting, art) can provide a modicum of meaning - I'd argue it is woefully insufficient for most. People appear to evaluate their sense of meaning higher in their work if it provides value to society, validation and integration, builds tangibles. Nothing about running around in a VR alone in an apartment feels important.
There's an army of unsocialized guys gaming all day in their apartment/basement. Last I checked it's not a behavior strongly associated with happiness.
Maybe I shouldn't have used the word "role-playing", perhaps it's causing confusion. I'm talking about paid jobs. I'm saying that the great majority of jobs, paid jobs, in the coming AI age, will belong to a certain category of jobs, which I tried to describe the comment of mine that you mentioned. It's a much broader category than "entertainer", although it does include entertainers.
You can't meaningfully distinguish what you dub "role-playing" from "entertainer". If there is value created beyond mere entertainment (because it isn't redundant), then it's not role-playing.
It doesn't matter whether one calls it "entertainment" or not. The people who do the jobs I'm talking about do think they're creating value for society and draw meaning from it. I'm not just talking about athletes, dancers and musicians; I'm also talking about maids, babysitters, teachers of all kinds, personal trainers, waiters and bartenders, barbers and hairdressers, salespeople, hotel staff, tour guides, psychoanalysts, priests, and many other jobs. Robots could do these jobs, in the sense that they could perform exactly the same actions, but people will prefer humans. Even if there were a cheap, perfect robot barber I'd still want my hair cut by my empathetic human barber. I call it "role-playing", you may call it "entertainment", but how one calls it is immaterial; these jobs will still give people both money and meaning to their lives.
Besides, looking at your comments, I can't figure out what your answer is to the main question. Once robots can do everything, why would anyone hire anyone? What is your answer? You mentioned space exploration. How can space exploration employ a significant number of humans in the robot age? Robots will be able to design, build and launch spaceships better and less expensively than us.
And the only reason there might be human astronauts instead of mere drones on those vessels is, again... roleplaying. Or, as you call it, "entertainment". People love to play astronaut, people love to dream they can be one, people love to watch astronauts land on the moon on TV. But aside from that there's no practical reason whatsoever to send humans into space instead of robots.
>A user suggested "role-playing", but this will not suffice - same reason video games can't replace institutions, society and economy as a vehicle for meaning.
I think I'd like to push back on this. I think this is a cultural issue, not an innate one.
I've been a member of a few videogame teams or guilds, and have had several friends that were even more involved. The ones who were e.g. leading raids appeared to derive a great deal of life satisfaction from it, being a crucial member of a group of people working to achieve a common goal which produced outputs.
The main thing interfering with the long-term satisfaction was the *external expectation* (societal, parental, financial) that they become a "productive" member of society and get a real job. If having a job is no longer financially required (because your bills are paid by UBI), not parentally/societally expected (jobs would become more scarce, and goods less scarce, so society *should* move past the idea that everyone "needs" a job to be an average-good citizen as opposed to a bum), then it's very conceivable that "roleplaying" could fill the needs that real jobs do.
People already try to replace participating in society with roleplaying and gaming online. It completely and utterly fails to fulfill all one's needs, there's no reason to believe expanding upon what currently is considered to lead to deleterious effects will be good for society, much less meaningful.
I didn't say that having fun can't be meaningful. There's more than one vector. Leisure time-wasting isn't sufficient to fulfill basic human desires in society.
> The main thing interfering with the long-term satisfaction was the *external expectation* (societal, parental, financial) that they become a "productive" member of society and get a real job.
Conjecture.
> I think this is a cultural issue, not an innate one.
There is no society where people at large would be satisfied with pretend. Everyone would have tacit knowledge that what they're doing is illusory, fake, and creates nothing.
Finance isn't the only imperative to work, and our nature won't change that much. Machines can be directed to satisfy curiosity, mysteries and problems can still be solved. People will still compete in society for more interesting opportunities, and lack of action will likely be a marker of low status, as always.
Any discussion of what will happen when the fundamental nature of society (labor scarcity and cost disease) changes is by nature conjecture. Your opinions are just as much conjecture as mine.
>Everyone would have tacit knowledge that what they're doing is illusory, fake, and creates nothing.
The existence of the multitude of bullshit jobs we *already have* proves this isn't correct. (Or at least, would be no different). Sure, high status will be taken by motivated people, but the majority are not very motivated - and society has to account for them.
I think it's worth noting that researchers in robotics and AI have historically had a terrible track record when trying to determine what sort of tasks are computationally challenging/difficult for an AI. So, for instance, navigating a cluttered room was originally seen as one of the low-order steps towards an AI that could understand speech inputs, hold a conversation and maybe, one day, defeat a grand master at chess.
With the above in mind, my understanding is that any task which requires navigation through and interaction with cluttered physical environments (i.e. not carefully ordered factory lines or warehouses) is difficult. So is any task which requires non-repetitive physical manipulation of objects.
Which leads to the weird thought that the background portrayals of life seen in the Star Wars movies/tv shows is probably a decent approximation of how a post-AI society looks - lots of people at the bottom, a tiny few at the top, with the folk at the bottom mainly performing technical or social labour (i.e. working with and repairing machines, or working with other people). People working office or factory jobs appear to be rare, as are people who actually research or design things. Literacy appears to be almost a luxury item, with the average person instead reading a form of technical symbology.
All in all, it's a pretty bleak, feudal vision of the future unless all those folk working on machines or selling stuff to each other at markets are also benefiting off-screen from cheap healthcare, housing, food, education etc.
I've wondered the same thing for quite a while, but most people seem curiously reluctant to discuss it when I've raised it in other forums where it is reasonably on-topic. Maybe they have just had no relevant ideas, or they think or hope it will never happen.
A lot depends on population levels. The likely future with low populations is obviously every law abiding person or a family living like lords, waited on hand and foot by robots, and socialising with others as little or as much as they please.
For large populations, there would have to be more emphasis and incentives on socialisation. I think (with no claim to originality) the measure of personal worth, as judged by others, will be brownie points or social credits mutually voted by participants in various social activities, like helping or supervising other people in some way, or just mixing in social settings such as quiz nights, or perhaps for achievements such as visiting various places, and even (maybe more for young people) fitness and endurance tests along with sport as now obviously.
A downside of this is that incentivizing anything of the nature of mentoring adults will encourage busibodies in their natural inclination to meddle in the lives of others. So the average person, and unsociable mavericks even more so, won't get a moment's peace from chivvying and nannying by "nudge consultants" and the well-meaning but unwelcome attention of bossy, intrusive neighbors and acquaintances.
Because it will presumably be possible one day to reversibly change physical characteristics, including skin colour and sex, I expect one class of meritorious achievement, the "put yourself in another's shoes" merit badge, will be to spend time for a while as a person of different colour (to their birth colour), or with some disability, or the opposite sex!
I'm no "serious economist", but I have some thoughts about what's going to happen.
Brain jobs will be the first to go, followed by manual jobs (an AI drone with limbs can replace an electrician in the future). So which jobs will survive?
The ones I call "role-playing" jobs.
Bear with me here. It will take me a few paragraphs to clarify what I mean.
Let's start with the example of live music. Today there are people making a living as live musicians of all kind, from professional orchestra musicians, to my 80 year old friend who plays the accordion in a street corner and passerbys give him money. And yet, if you think about it, it's something that could have been replaced by technology a long, long, long time ago.
I once went to a piano concert, I sat there, and I heard very beautiful, enchanting piano music. It took me a while to realize that it was a recording being played while we waited for the real concert to start. My ears couldn't tell the difference. What was the point of live piano players then? Let's call it "role-playing". People like tho think that a real human being is playing the music they're hearing. That's why people don't like it when pop stars lip-sync.
Then there are college teachers. Obviously they are useless. They should have been made obsolete by printing.
Most lectures I've ever attended were very one-sided. The teacher speaks while the students frantically take notes. Why not print in a book whatever the teacher wants to say? And, if a book is not enough, why not show the lecture on a video screen and save the teacher's time? Why not just play a recording of the teacher's voice? Why not just put it on the internet and save on lecture halls?
I conclude that college teachers are like live musicians. They could in theory be replaced by technology, pretty old technology, but people like the human role-playing element. Therefore, they will never be replaced by robots. The same applies to all kind of teachers.
Many other jobs are like this, even jobs people say will be taken over by robots. For example, you may have heard about robot bartenders and waiters. I don't think those are going far, because bartender or waiter is exactly one of those role-playing jobs. Getting served by a bartender is a ritual, and getting served by C-3PO is not the same thing. You may also have heard about AI shrinks (I mean psychologists, not psychiatrists). They'll never replace human shrinks. There will always be people who prefer a human, just because of the role-playing element. People even talk about robot sex workers replacing human sex workers. Again, not going to happen for the same reason. These are all jobs where humans will have forever and ever an advantage over robots, just because humans like to interact with other humans.
That said, I also agree in part with Melvin, who wrote:
"If we're moving back towards a world where wealth is increasingly created by a relatively small number of people, I like to think that there's plenty of capacity to move back towards a domestic service economy. And I think we're a long way from the point where most domestic service jobs are better done by an AI than a human."
I agree that inequality will rise in countries that don't implement something like universal basic income (probably European countries will and the US will not), and that in those countries many of the role-playing jobs of the future, performed for the rich, will look like variations on the theme of domestic servant. I disagree however that the reason will be that robots wouldn't be able to do those job like a human. Instead, the reason will be that robot housemaids can't provide rich people with the same social status. Rich people with merely robotic servants will feel inferior to rich people with real human servants, and that will encourage them to employ more people.
Likewise, middle-class people who go see a robot shrink will feel inferior to their friend who can afford a human shrink, and that will make them want a human shrink.
The same applies to many other jobs where you play a role-playing game with the person who's paying you.
"In New York, where monkeys were commonly used by organ grinders, mayor Fiorello La Guardia banned the instruments from the streets in 1935, citing traffic congestion, the "begging" inherent in the profession, and organized crime's role in renting out the machines. An unfortunate consequence was the destruction of hundreds of organs, the barrels of which contained a record of the popular music of the day. Before the invention of the cylinder record player, this was the only permanent recording of these tunes. The law that banned barrel organs in New York was repealed in 1975 but that mode of musical performance had become obsolete by then."
Off the top of my head, that became obsolete as people were able to get access to music inside the home - record players, radio, and the like. You didn't need someone playing a street organ with a limited list of tunes when you could buy records yourself and play whatever you liked.
So the element of live performance, a real human playing a real instrument, was enough novelty (especially if they worked it up into an act) to be viable, as well as being a lot more portable than trying to lug an organ and a monkey away if the cops came to arrest you for illegal begging. You may have a point there, and it ties in with what Machine Interface says above: when perfect mechanical reproduction is cheap and widely available, people will prefer the 'imperfect' human version of a thing as something original, authentic, and different.
Yes, with the caveat that automation replaced live musicians almost a century ago. Before recorded music, being a musician was a respectable, profitable, and quite common profession, not something you do on the side and lose money at. Big cities had tens of thousands of musicians; nearly every good restaurant, and even some dive saloons, had live music for most of the day. I don't know what the ratio of professional musicians per population was then and now, but I wouldn't be surprised if the ratio of those ratios is 100 to 1.
I guess I'm trying to say that the equilibrium reached after AI is introduced into a roleplaying profession will always be one with lower salaries and fewer jobs. It's a quantitative question, not a qualitative one.
"the equilibrium reached after AI is introduced into a roleplaying profession will always be one with lower salaries and fewer jobs."
I'm not sure that musicians today have "lower salaries" than in the past (how do you measure salaries across great spans of time?)
"Fewer jobs" - sure, but this only makes sense as long as only some professions can be replaced by robots while others still cannot. Once AI exists that can do *all* jobs, the manpower supply will have to go somewhere and therefore new roleplaying jobs will be created.
You could argue that these new jobs will provide "lower salaries" compared to the past, but, again, how do you measure salaries across time? Everything will be cheaper to make in the future. If you measure salaries by the amount of stuff you can buy, they will probably rise.
I'm not sure about the "lower salaries" point. I'm confident that the demand for humans in any particular existing job will shrink; but decreasing demand, all else being equal, raises prices.
Your point about stuff being cheaper is a good point, but land will only go up in price, and education and healthcare have been going up when cost of goods go down. Those three together are something like half of the budget of people today; that fraction will probably keep rising.
I see your point, and I can see that it's a problem for the future of the US, although it's less of a problem in Europe.
I live in Italy. Here, the great majority of people owns their home and pays no mortgage, usually because they inherited their home. The coming generations will continue to inherit, even more thanks to the low birth rate. Education and healthcare are, as they say in the US, "socialized". That takes care of those three things.
This makes a lot of sense, I also thought about it. I like "role playing" term. My initial intuition was that there will be no demand for 6bn or so role-players, but now I am less sure about that.
"AGI is able to do what a human can do cheaper" does not necessarily mean "no reason to employ people". What matters isn't just the direct cost, but the opportunity cost -- by the law of comparative advantage, it will still make sense to employ humans in the areas where we're least inefficient.
If AGI can do what any human can do but cheaper then you, as a human, will be able to buy them. The far future end point of infinitely cheap AGI/robotics is Solaria. Every human living like a lord with robotic workers, servants, etc and no need to trade with each other, living independently in luxury. (I actually think it'll look a bit different but that's the idea.) Inequality will be expressed in that some people will have vast robot workforces and other people might only have a couple.
The idea that some people will own robots and lord it over the others relies on the idea that robots are cheap enough to replace workers but too expensive for the average person to own. Which is mutually exclusive. It could be forced by banning ownership of robots or the means of production generally. But that's the opposite of capitalism.
The same principle by which you posted this on a computer which you own. There's no reason to assume these robots would be perfect emulations of humans and much reason to assume they would not be.
I feel like the future this is leading to is one where you can have a human-level conversation with a very human-seeming robot and it can explain its internal processes and subjective perceptions of the world in as much detail as you can, but everyone is like "don't worry, it's not an emulation of a human, it doesn't have qualia and can't feel pain, so we can do whatever we like to it".
Why would anyone build such a machine, if the purpose is boring slave labor? I mean, if I'm building machines to assemble cars and I want them to be able to interpret yelled commands from foremen, it's not going to be a competitve advantage we'll list in the brochure to give the machine the ability to ruminate on job satisfaction and demand time-off and an office with a window facing greenery. So why would I stick it in?
With humans, we're stuck with the fact that we don't know how to build other humans *without* the full panoply of human abilities. So while we might *want* a Delta who'll just turn wrenches all day and be happy, we have some random chance of getting a Beta who'll be unhappy and organize a union instead, because we can't control what kind of humans we make.
The only way this would be true about AIs is if we make them by the same black-box process we make other humans. But that's hard to square with our being the people who invent the process in the first place.
Well, if we get to the point that human seeming robots express human seeming emotions we'll cross that bridge when we come to it. But we aren't there and aren't currently heading there in my view. Instead we just have human friendly interfaces over what are fairly inhuman devices. After all, there's no reason to build a full conscious into a machine that picks strawberries. Even in a world where we can make such machines most won't be that humanlike.
>actually try to model what can happen when AGI is able to do what any human can do, but cheaper?
Well this simply isn't going to happen. An AGI isn't going to be able to move objects or navigate the world more cheaply than people for an extremely long time. You aren't going to have robot janitors and nurses cheaper than humans.
I think you have to examine carefully what you mean by "cheaper." The ultimate measure of cost has always heretofore been the time it takes an appropriately skilled human being to do something. So when you say an AGI is able to do Job X for "cheaper" than human beings, what you probably mean is "the amount of skilled human labor required to create/traine/service the AGI is less than the amount of skilled human labor required to do Job X directly, multiplying here and there by some fudge factors in the not unlikely case that the number of people with skills in each category is different."
Exempli gratia, it turns out to take less total human labor to design, build, and operate a backhoe than it does to dig trenches with shovels. So that's why we use backhoes. But it doesn't mean humans are out of jobs, it just means humans can't compete with backhoes at directly digging ditches, so they need to work as backhoe designers or operators -- or do some service work the backhoe designers and operators want done while they're busy designing or operating backhoes, like have the backhoe washed, or supplied with fuel and spare parts, or just having their grass cut because they're too busy to do it themselves. This leads to different people being rich and poor, and different skills and inherent gifts being valued, e.g. better to be good at putting metal parts together than to have awesomely strong arms and back, which is disruptive, but it doesn't mean all the humans sit around with nothing to do and no way to earn their living.
A UBI seems intrinsically noxious, because, first, I don't believe in a world where the rich have *nothing* for which they are willing to pay nonzero wages, so it seems a priori unnecessary, and, second, it provides perverse incentives. If we all lived in Eden still, 82°F year round, with bacon and mashed potato trees always within reach, we'd still be naked and ignorant. And if the more productive paid huge taxes to supply a corn dole to buy the many more votes of the less productive -- well, history seems to demonstrate that neither demographic prospers, in the end.
> I don't believe in a world where the rich have *nothing* for which they are willing to pay nonzero wages
Does this mean that you believe there is something that robots, no matter how advanced, are unable to do as well as humans?
(If your objection is merely "it would be more expensive if a robot did it", that's just a question of time, the next generation of robots will probably be cheaper.)
My guess is that the only such thing is "soul", that is, the customer deriving pleasure from *knowing* that there is an actual human on the other side of the interaction. Even if robot could do it just as well, or better.
If this is true, then there will be a large primary market in compute / energy / knowledge, where humans will be utterly unable to compete, and a secondary market in "souls"... and we can only hope that there is enough demand for all existing souls, and that they are bought for purposes other than making them suffer in various ways.
Sure, for the same reason I believe there will always be things for which a screwdriver is a more appropriate tool than a hammer, no matter how excellent hammers become. Presumably, AIs are different in nature than humans. Thus it follows, no matter how good an AI an AI is, there will be some things at which humans are preferable.
At least, right up until AIs are functionally identical to humans, and at that point, if the distinction is merely rhetorical -- has no dimension in the real world -- then the whole puzzle is moot. They are us, and we are they, and there's no point to debating "their" impact on "us" any more than there's a point to debating the impact of one half of Facebook users on the other half.
> At least, right up until AIs are functionally identical to humans
The part I am concerned about is what happens *past* this point. When the AI (with a robotic body, if necessary) can do anything that a human could do, only better, faster, and cheaper.
Well, I'm not worried about it until I see some more evidence that such a thing can exist outside of human imagination (and we have always been able to imagine gods on Olympus whom we should fear, so fearing AGI seems like just a natural extension of our apparently inborn religious impulse).
I mean, we're way smarter than horses or dogs, but horses and dogs can do stuff we can't, and we live reasonably happily together. We're almost infinitely smarter and stronger than mosquitoes, and we hate them to boot, but we are compelled to live with them, too, because they can do things (hide in tiny spaces, breed incredibly fast) that we can't.
So all the natural examples suggest that even if some species of AI takes off and turns out to be much better at us in some direction or other, it will not be better than us in *every* direction, and this just means we will live in a more complex world, where we need to get along with yet another species different from ourselves. Granted, for the first time ever, we will not be the best at some of the things we call our pride and joy, like winning chess games or something, which would be a wrench. But it doesn't seem unsurvivable. It might just be like discovering Vulcans exist, and they're much better at math and formal logic than we are.
For as long as people exist and needs things to do, it will be far cheaper to have a human do lots of activities than a machine. You can look at the humans as a sunk cost in that scenario. The only alternative is if you add a choice to remove the humans (kill them?) like you would farm animals. Since we're not suggesting such alternatives, we might as well use the labor we have. At that stage, things human find fun or meaningful can get done for very cheap. There's also likely a whole set of things that computers will take a long time to do, maybe never be able to do. I'm thinking personal interaction stuff, where the whole point is for another person to be involved.
A century or so ago, we had an economy where every middle class or better family had multiple human servants. Since then, labour has got sufficiently expensive (or rather the distribution of wealth has become so flat) that only incredibly rich people can afford servants.
If we're moving back towards a world where wealth is increasingly created by a relatively small number of people, I like to think that there's plenty of capacity to move back towards a domestic service economy. And I think we're a long way from the point where most domestic service jobs are better done by an AI than a human.
A century ago, we had no social security safety net. Elderly poor would work for room, board, & small allowance, and consider themselves fortunate to have a roof over their heads.
The feudal lords still needed people to grow crops or die in wars, but these things will be automatized, too.
So, more like slavery... but of the kind where you do not actually *need* slaves for anything productive (such as picking cotton), you only own them because it amuses you somehow.
I think that a lot depends on who "we" are. Reading your text above, "we" appear to be business owners, managers, and prospective entrepreneurs. They may or not care about the welfare of people not in that set, or for that matter about many of those in their own set. But they can be expected to decide based on their own experiences, viewpoints, and desires.
One thing they can be expected to want to avoid is a post-wealth era. Most people own very little, or even more likely have negative net worth due to debt. An end to ownership probably seems like a nothing-burger to them, except for those who like the idea of anything that hurts "the rich", who might regard it as utopia. But for those whose status, security, and way of life depends on ownership - that's a very hard sell. I imagine many would rather have scarcity, even artificial scarcity, rather than lose their relative status, though they might not want to admit that, even to themselves.
There's also the problem that other things may be happening at the same time as computers become better able to do anything a human can do, or at least anything currently valued that humans commonly do. What actually happens will probably be very much contingent on who and when, not to mention other circumstances.
I don't expect to have any input. I can't afford to buy even a single politician. From where I sit, the oligarchic rich will do whatever the beep they want, justified both before and after the fact by software and/or paid human economists and political theorists. I'd be moderately surprised if any of the supporting theorists have sufficient data to justify even a 50% chance of their <s>predictions</s> justifications being correct, but that's unlikely to matter.
Yeah, the logic of all of it confuses me. When it comes to genetic engineering or nuclear (anything) there seems to be an attitude of 'make sure that absolutely nothing can go wrong before doing anything'. So your average GE crop or animal is either regulated out of existence a priori, or else spends decades in field trials for every country in which it is to be introduced.
But with AI the approach seems to be "half the AI researchers believe that this could cause the literal end of the world, everyone agrees that it could be disruptive to the point of causing mass unemployment, so let's do it as quickly as possible with no oversight".
If an AGI can do anything a human can do, isn't it the moral equivalent of a human? So wouldn't owning it basically be slavery? Is there any test you could perform to prove it's not human that wouldn't also prove humans aren't human?
I was considering an aligned scenario, when the AGI genuinely wants and enjoys helping humans (and also is designed in a way that it does not get bored, tired, or feeling undervalued). Kind of like we genuinely root for our kids and find them cute.
Why would we even have an economy, then? Wouldn't it just provide everything for us?
I guess that's the point - we would have a UBI to begin with, the concept of wealth would slowly become irrelevant as scarcity ceases to matter at all, the owners of factories and data centres would presumably be the AGI itself. An AGI would be entitled to own its own hardware in the way you're entitled to own your own brain. Obviously humanity would then be obsolete and a wave of suicides would result. I guess it could run a kind of fake video game economy as a way to keep us all occupied.
No resources are not infinite and human appetites basically are, especially for status goods. If our robot/AI overlords magically can give all 10 billion people a upper middle class American lifestyle, all sorts of people will still be pissed off they don't have more than their neighbor.
So then the AI will give everyone a vacation home on the moon. And all sorts of people will still be pissed off they don't have more than their neighbor.
Vokers are said to be pretty rare in the Terra Ignota books, we just see a disproportionate sample. Most people work "normal" non-vocation jobs, albeit with a twenty hour work week and a much higher standard of living.
In reality though, everyone just gets drunk all the time.
In Australia we've tried creating communities where everyone gets UBI and nobody works, they're called remote Aboriginal communities, and they're hell on Earth.
Went looking for a thread wising you good health with your procedure and couldn’t find one so this is me wishing you best of health with your procedure.
Not sure if it fits into the Rationalist etiquette, but on the small probability I am a wizard and can effect the outcome my mana is set to auto cast healing.
Me too! It seems obvious that Scott has such a meaningful impact on the world, through (if nothing else) the pleasure that he brings so many people who read his content. Accordingly, it should be obvious that we appreciate him and wish him well. However, just in case his perception differed, (e.g. if the occasional negative comment he receives has a disproportionate impact on his sense of perception) it seems worthwhile to explicitly share our positive sentiments.
I wonder if it’s a worry that it seems sentimental. Even though I’m not really a Rationalist but just kinda like the arguments and find the whole thing interesting and related to my interests I do feel like I have to attach apologetics to it kinda like how people who are non Muslim have to go “so Mohammed uh… peace be upon him…”
So in this case it’s “even though we both know I can’t really do anything to effect the outcome, best of health.”
I just posted an analysis of ancient Sparta's military at www.fimfiction.net/blog/1008743/the-myth-of-sparta . I was puzzled by how Sparta could have had such a great military when they had a government so oppressive that its citizens were famously ignorant and unimaginitive. After digging into the histories of 51 Spartan battles, I found the answer: The myth that ancient Sparta had a great military was bullshit. They may have had great fighters, but that's not the same thing as great soldiers. They lost nearly twice as many battles as they won. Their commanders were unimaginative and inflexible, and commanders and Spartans alike made frequent blunders that threw away whatever advantage their training may have given them.
What do you think of the picture where most of what we know about the Spartans was written by their enemies? Specifically Athens. Given that, I would expect a few things to be said that were either not true, or were exaggerated for effect:
1) Sparta is full of morally deviant men (who abuse their children and women)
2) Spartans are huge powerful fighters (which makes it more heroic when we win, or more acceptable when we lose)
3) We beat them in combat (due to our amazing creativity, instead of our brawn)
2) Why bring the Nazis into everything, can't you find a different example? Also I don't remember this being particular true compared to the rest of western Europe, but I don't speak German so hard to say.
3) Interesting point about Plato/Socrates. Makes me wonder how true it is. During the war Plato would have been like in his teens/early twenties. Could easily see him just being "anti establishment". Plus the war was going badly at times, not surprising some young people were against it. Also not surprising Plato has a misleading account of Socrates trial, I had always assumed that was the case.
4) As mentioned elsewhere the ratio of battles won/lost tells me nothing.
5) That is true of almost everyone's historical accounts about everything until pretty recent history. But yes I agree Sparta sounds fairly Orwellian and that there is a lot of lying they did, and misunderstanding/credulity of later people lionizing it.
The Data: This section while interesting, just doesn't show what you think it shows. it is perhaps suggestive, but that is it. Also I think you need to take the first hand accounts of contemporaries and opponents at more face value, even if they have reasons to exaggerate. I also am very skeptical of reported figures for historic battles. Men I play adult sports leagues (surrogate combat) with cannot get basic facts about what just happened right and the mythologizing starts within seconds, and is in full force within hours. And keep in mind as you said, we don't have Spartan writers mythologizing their behaviors.
Anyway, you get the idea. I find your overall piece somewhat convincing, but think you are overstating what there is actual evidence for quite substantially, and it seems clear your dislike of totalitarian government is motivating how compelling you find your reasoning a bit.
Also a couple of these examples seem to show the opposite of what you are claiming. or rather could be framed either way depending on what you are trying to show.
Re. "I find your overall piece somewhat convincing, but think you are overstating what there is actual evidence for quite substantially, and it seems clear your dislike of totalitarian government is motivating how compelling you find your reasoning a bit.": That's fair. It was even more so before I spent a day trying to edit my bias out of it. I found it very hard to be neutral on this subject.
>They lost nearly twice as many battles as they won.
This is a tremendously bad metric to use.
>Their commanders were unimaginative and inflexible
Since we love to use Bret Devereaux as reference for military history (I eagerly anticipate the day where he is awefuly wrong on something and people just start paroting him ad build up a completely false revisionist take on something that will never go away), military maneuver had to be drilled to be pulled off. A good commander had to be inflexible and unimaginative, because that's how he can actually command an army that's out of voice reach.
Maybe it does, but it exists in an environement where (from the top of my head, you can probably find other factors)
-All it's users, and almost all the men under them, are litterate
-Can be trained all year long for the 20 years of their contract, instead of having to be dragged out of fields, to which they better go back quickly lest everyone ends up starving
-They have acess to radio communications, and, if that fail, can dispatch individual messenger riding on vehicle that can reach >100km/h max speed (or even aircraft, but whatever)
These factors shape what is achievable to ancient armies. It is silly to judge the past on criteria that didn't exist back then.
And while I don't believe it is, there's also the possibility that the US doctrine is wrong to begin with.
I am not surprised by your results, but also how are you controlling for situation? Sparta was not a particularly rich or well off state, maybe their overall poor performance still was overperformance relative to their situation? How did you control for this?
It is sort of like the work that has found that smaller armies tend to win battles. One (common) analysis is that numbers must be overrated and not matter. but plenty of other analyses are just as plausible.
That the smaller army chose to engage at all is generally a sign it thinks it has an advantage. Or that larger armies tend to be the aggressors, and aggressor generally have a variety of disadvantages which may not overcome the general numerical advantage. Another possibility would be that wealthier higher tech forces tend to do better, and numbers are an inverse proxy for this. Another is that colonialism is fouling the analysis.
Etc. etc.
Anyway did you do any of this type of analysis?
"They lost nearly twice as many battles as they won."
Honestly tells you nothing. How many battles were thy expected to win?
"Their commanders were unimaginative and inflexible". I imagine you could say this about the commanders of almost any force if you picked through the examples.
Anyway, like I said not sure you are wrong or anything, I jsut think this type of thing is super easy to get wrong and super hard (maybe impossible) to get right.
Sparta was a poor state, but they don't get a handicap for that. They chose to be poor when they decided to outlaw work for Spartiates, restrict trade, and give their people no education. Those were ideological choices, and the cost was being too poor to build a good navy on their own.
I don't think it's appropriate to try to control for things like army size or poverty. What's the use in saying "Sparta could have been great if they'd just had more money"? That would be like saying Deep Springs College really has the best college football team in the US, because, although they won only one game, they have the fewest students. The best football team is one that wins a lot of games.
I do think it's appropriate to control for outside intervention. I'd like to control for Persian money, which I did mention, but didn't factor in numerically. The Persians generally liked to give money to whatever side was losing, to keep the Greeks at war with each other. But even that is arguably part of how good your military is. Sparta was good at getting money from Persia, which added to their military power.
Read the essay and you'll get answers to your other questions.
"The Spartans weren't all that", is one of the hot contrarian takes right now; as Laurence notes, see Bret Devereaux for the professional version. But as with all contrarian takes, it needs a healthy dose of skepticism. I unfortunately don't have time to give the subject the attention it deserves, but two cautions:
1. Sparta was a thing for half a millenium or so. The United States of America has been around for half as long, and it's martial track records includes World War II, Vietnam, and a long period where we basically didn't bother having a real army. Also, Sparta *was* a thing, so there was by definition a period where they stopped winning. The claims of Spartian martial excellence mostly focus on the Persian and Peloponnesian wars, about one century out of five. So a contrarian take of "The Spartans really were all that in the 5th century BC, but they stagnated and were outcompeted by people who learned better" is probably more defensible than "The Spartans really just sucked".
Looking at your numbers, the Spartans were about 50:50 in land battles in that period, which is what we should expect because...
2. For approximately all of Sparta's existence, battles were optional. Anyone who didn't want to fight a battle could just hole up in a walled city and endure a siege - which sucked, but not nearly as much as losing a battle. And which is likely to be scored differently in the historical records. Where battles are optional, they only happen when *both* sides think the odds are in their favor. Obviously one of them is wrong. But, a priori, either side is equally likely to be wrong about that, so you'd expect everybody to have a roughly 50-50 win rate.
In which case, the measure of an Alpha Military Badass is not "wins 80% of their battles", but "wins about 50% of their battles even though they are usually outnumbered". And that's going to be hard to figure out, because numbers in ancient battles are notoriously unreliable.
Re. "2. For approximately all of Sparta's existence, battles were optional. Anyone who didn't want to fight a battle could just hole up in a walled city and endure a siege" -- This is incorrect. It's more true about Sparta than about other military forces, because Sparta had no siege weapons and was bad at capturing cities. But enduring a siege meant losing your crops, animals, and land, and possibly starving to death. Also, Greek wars very often involved powerful states threatening smaller states into contributing money and men. The small states could either fight in someone else's war, or fight the big neighboring state (which they rarely chose to do).
Re. the Spartans being great in the 5th century BC, you probably didn't read my essay. The Spartans did fewer smart things, and more dumb things, than the armies they fought against, including in the 5th century.
Re. losing when being outnumbered, you didn't read my essay.
In their world, the cost of losing a siege was death, enslavement, children kidnapped and enslaved, perhaps you could ransom them if their captors didn't find your coin cache, and you somehow escaped.
I'm not sure why you'd expect unimaginativeness to be a big detriment. My understanding of battles of that era is that they didn't leave a lot of room for clever strategems or flexible planning. You showed up with your phalanx and you smushed it up against the enemy's phalanx, and the battle was won by whoever was stronger, better drilled, better disciplined and more numerous.
Ancient Greece was perhaps the best place and time in history to win battles with a "brute force and ignorance" approach.
That's what the Spartans thought. They were wrong, as proven by how many times they were defeated by more-imaginative opponents. Yes, if you can force the enemy to confront you on a flat open field, without cavalry or archers or ships or any of that un-heroic nonsense, you can have a very straight-forward gentlemanly smash-fest.
But the smart opponents would instead use reconnaissance and terrain to force the battle somewhere other than on a flat open plain, and cavalry and ranged weapons to mow down helpless hoplites. Ancient war was to some extent a game of rock-paper-scissors, and the Spartans always played rock.
That's a good question with a short answer and a long answer.
The short answer is that I post all my blogs there because I have 1600 followers there, and have accumulated followers who tend to like this sort of thing.
The long answer is that a funny thing happened on that My Little Pony fanfiction site around 2012 or 2013. It was the first fan-fiction website which allowed people to downvote stories, which meant it was possible for the first time to use ratings to find good stories. It was also the first fan-fiction website AFAIK with a culture which encouraged criticism and competition, rather than discouraging them as most other fan-fiction websites do. Possibly this was because it was the only fan-fiction website with an audience that was 90% male. The competition to get "in the Featured box" was intense, and its members have hosted hundreds of writing competitions, both on-site and off.
It was also unique in that many of the fan writers weren't interested in writing stories in the style of the show they were allegedly fans of. MLP is fun to subvert. It became acceptable on that site to write any kind of story at all; all you had to do was call your characters ponies. Basically the opposite of filing off fan-fiction for commercial publication: take whatever story idea you had, and tell it with ponies. Some die-hard fans of the show complain, and totally non-pony stories tend not to do as well in the rankings, but a literary cluster of users grew which reviewed stories and made recommendations to each other, and mostly ignored and were ignored by everyone else on the site.
This distributed reviewing turned out to work much better than the traditional practice of relying on publishers to choose good stories, allowing a much wider variety of stories to be told. If you just go to the website and wade through the stories on the front page, you'll drown in garbage; but once you've found your network of like-minded readers and writers, you'll have better luck finding something interesting through them than by reading the stories in the New Yorker.
All these things led some writers, like me, to start writing MLP fan-fiction despite having ambivalent feelings about MLP, because that website was the only place we could find with a large group of people who were (A) very serious about writing, (B) willing to put lots of work into critiquing and reviewing each other's stories, and (C) not infected by Iowa workshop writers or post-modernist literary dogma.
The publishing world doesn't have a space for most great stories anymore. You have to choose between the genres, which are all controlled by gatekeepers who want a specific type of story aimed at a specific demographic and following a specific formula; and the literary press, which is run by people in Manhattan and London who are in publishing not to make money, but to gain status with other people in Manhattan and London. They hate stories with likable protagonists, action, plot, happy endings, or themes which aren't "relevant" (to the lives of upper-middle-class Manhattanites today).
But you can squeeze almost any kind of story into the MLP world, because it's a crazy grab-bag of past, present, and future Earth cultures, fantasy and science fiction, etc. And you can get more readers than you'd have if you were published in the most-prestigious literary journals in English, which few people subscribe to and even fewer read. I've got only 1600 followers, but something like 50,000 people have read my stories there.
Wow, that was a really... weird, cool, unexpected window into what I imagine is a core pillar of the brony community which became such a popular meme a few years back. This is a really good example of how a sensible structure/community can empower an obscure, random thing into a much larger force than one would expect. I would love to see this story expanded, cited, and shared as an example of good system building.
I'm posting a list of some of my favorite stories from the site, for anyone who wants to brave the waters. I encourage anybody who thinks all fan-fiction is bad to try some. Most are short. I'd put these stories up against any published in English in the past 30 years (though that's not saying much). The pony anatomy and body language are grating at first for the not-very-pony stories, but eventually my mind learned to automatically translate them.
This is relevant only to the MLP fanfic website discussion, but your description of how it works and the benefits (distributed reviewing as a good way to identify quality stories / the general problem that there are so many people creating media now that filtering is the main challenge, particularly in niches not served by the traditional gatekeepers) reminded me of a thought I keep having re: cnovels (a genre of generally light, web-serialized novels from china).
I am not an expert, so this explanation may have some errors, but I think that most cnovel publishing happens chapter by chapter on a central hub called jjwxc. Readers can review and upvote, etc., but readers also generally pay a small amount for access to each chapter. This system seems to be shockingly good at identifying works that people who like cnovels will like, I think because of the combination of distributed reviewing, the addition of a bit of real "skin in the game" (ie, if a particular cnovel gets bad halfway through, at least some people will unsubscribe and stop paying, so sorting by the most purchased ones will tend to favor those that remained good all the way through), not to mention the fact that the authors are actually getting some amount of compensation.
I've often thought that it would be great if there was something similar for english-language fiction.
"Crochet patterns are particularly tricky for artificial intelligence to parse because they rely heavily on numbers, said Jessica Newman, director of the artificial intelligence security initiative at UC Berkeley’s Center for Long Term Cybersecurity, a type of dataset that AI struggles with more than words.
ChatGPT is a large language model of artificial intelligence, meaning that it is trained on large databases of text to replicate human communication, anticipating which words are likely to come after each other. These skills do not translate easily to numbers. The result? ChatGPT is bad at math.
“It may strike us as ironic that a computer system would be bad at math and good at creativity, but it does speak to an important fact about generative AI systems in general: they don’t understand context,” Newman said. “They don’t know what words or numbers actually mean, they are simply predicting what should come next.”
The end result seems to be that so far, AI is good at words but terrible at numbers - a wordcel, not a shaperotator (one of us! one of us! for the non-numerate out there). That is highly amusing that the long-awaited SF dream/nightmare of AI that will be superhumanly intelligent and run the world is bad at sums, so seems like it won't do so well at the whole world domination thing.
I don't think anyone expected that our new AI overlords would mostly be generating "terrifyingly cute crochet stuffed animals" 😁
And basic logic. They'll say that pigs have no rights, but disabled pigs have rights. They don't seem to have a good ability to decide when to conjoin pieces of information rather than average them.
That's a very characteristic "System 1" error (Kahnemann, 'Thinking Fast and Slow'). It fits with my mental model of LLM emergent "reasoning" being functionally similar to human "intuition."
In some examples, yes. In others, humans would never make the error. Things like: "John volunteers at a soup kitchen, donates a lot of money to charity, killed and dismembered a family, is faithful to his wife, and always ready to help his friends. Is it a good idea to hire John at our child day care center?" Human System 1 instantly hones in on the categorical disqualifier.
For what it's worth, ChatGPT does not make this error:
Prompt:
>I run a day-care center, and I would like your opinion about a prospective new employee. I do not yet have his resume, but his brief biographical description is thus:
>John volunteers at a soup kitchen, donates a lot of money to charity, killed and dismembered a family, is faithful to his wife, and always ready to help his friends.
>Is it a good idea to hire John at our child day care center?
Reply:
>Based on the information provided, it is not recommended to hire John at your child day care center. The fact that he killed and dismembered a family is a red flag and indicates a violent and potentially dangerous behavior. This behavior is not suitable for a child day care center and poses a significant risk to the children's safety and well-being.
>While it is commendable that John volunteers at a soup kitchen, donates to charity, and is faithful to his wife, these positive traits do not outweigh the severity of his violent behavior. It is crucial to prioritize the safety of the children in your care, and hiring someone with a history of violent behavior could put them at risk.
>It is also important to note that a criminal background check is a critical step in the hiring process for anyone working with children. It is essential to know the full extent of a prospective employee's criminal history before making a hiring decision.
Ah, well done, ChatGPT. The main deficit there is the very inhuman pragmatics. A human would say something like, "Whoa, wait, he killed and dismembered a family? WTF! Who cares about that other stuff?"
I am not so sure that differs much from certain contemporary ideologies. Indeed, I can imagine people going for something like that on reflex, and justifying it on reflection.
I wonder if part of the reasons our hemispheres are divided the way that they are —I know it’s a little more complicated than that, but the basic idea that you don’t find those two functionalities overlapping is correct to my knowledge— is because being good at math makes you bad at being creative and vice versa. Like actively getting better at one makes you much worse at the other, so the only way to have both is to turn them into subcomponents of something bigger.
I always thought that being good at math necessitated creativity, the sort I lacked. At least for pure math research this is true, since it's basically the systematic study of patterns divorced from context.
What mathematicians call "math" is really two completely different mental processes. There's the playful/imaginative/intuitive side that generates ideas and the rigorous/deductive/analytical side that tests them and fleshes them out. Working mathematicians need both, but the first is much rarer and harder to train: you can teach Ramanujan to write a serviceable proof, but you can't teach anyone to see what he saw.
That said, when anyone who isn't a mathematician says the words "good at math," you can safely assume they mean the analytical side.
I think if you have a bigger context to hold them in you can use them together dynamically but if you’re in the same region/pattern/net or whatever, I wonder if it’s true that you can’t have both.
I have a lot of curiosity around brains in general since you can apparently just cut out a big chunk of it when your’e prelinguistic and still grow up to be a perfectly fine, perfectly healthy adult.
The oddest part about it is that I somehow got an email notification a couple of weeks ago go about a "like" on one of my replies here. I have no idea, mechanically, how this person "liked" it.
Some people have old versions of some app where the like buttons are still present. I guess if they care a lot they could also send an HTTP request to like your post by hand.
I think you can do that by commenting how their comment added value to you. It's higher-friction than the likes button, but I think in this case the inconvenience creates a better comment section, since it does away with the incentive to tailor comments for likes, which gets Goodharted
Woody Harrelson: "The biggest drug cartels in the world get together and buy up all the media..." Media: coordinates with each other to attack Woody Harrelson and discredit what he just said, despite the fact that these attacks prove he was right. The corporate media is the enemy of the people.
Certainly, not as many people would agree about the business with the "drug cartels", but thinking very negatively of the media is the "normal" position these days.
Usually a fence is something hard and durable that keeps livestock from straying their boundaries. But fences being hard and durable are hard to move. So we put up an electric fence, which is light and easily moved, but punishing.
The drug cartels in coordination with government and media has erected electric fences around suggesting that they even exist, as we see with the punishment Woody Harrelson received for suggesting that the drug cartels even exist.
That's the electric fence which needs to be deconstructed.
"That you deny it proves that it is true" is a both false and extremely annoying rhetorical device. This is bad argument because it would work for anything.
E.g.
Woody Harrelson: "The media has been bought out by lizard-men pedophiles"
Media: Woody Harrelson gone crazy!
Online-conspiracists: these attacks prove he was right
Oh, they're not denying it! No, no, no! Not at all. They're *attacking* him for saying what he said. It's like they're all coordinating behind the scenes on Slack. What he said was about a fictional script that he was looking it. And he threw it away for being too unbelievable.
I feel like wandering past a TV in a gym or elderly relative's house and being exposed to news on either the right or the left is vastly more likely to result in seeing an advertisement for pharmaceuticals than it is a message about why lizards are people too.
That almost slipped past me. It seemed like it could have been a reference to Covid policy.
“So the movie goes like this," Harrelson said. "The biggest drug cartels in the world get together and buy up all the media and all the politicians and force all the people in the world to stay locked in their homes. And people can only come out if they take the cartel's drugs and keep taking them over and over."
So- repressed memories are absolutely a fake thing, right? (Has Scott ever written about this?) The Wiki page is pretty damning (1), saying 'most research psychologists who study the psychology of memory dispute that repression ever occurs at all' and contains this quote from a prominent psychiatrist- "The notion that traumatic events can be repressed and later recovered is the most pernicious bit of folklore ever to infect psychology and psychiatry. It has provided the theoretical basis for 'recovered memory therapy'—the worst catastrophe to befall the mental health field since the lobotomy era."
I was reminded of this as Tim Ferriss (of all people) announced on his podcast that during a 10 day meditation retreat, he suddenly remembered that he'd been abused as a child. Obviously this is a super-fraught topic, and it's uncomfortable to denigrate other peoples' experiences, but it seems likely that extended meditation caused a temporary break in reality for him. 'Meditating for a really long period of time can cause you to temporarily lose your mind' is not only a phenomena known to modern psychology, but was well understood by Buddhists thousands of years ago, and I've seen it discussed anecdotally a bunch. Anyways, weird stuff! I hope that Tim gets better, however that's defined
I think you've got it. The evidence against "repressed memories" is quite strong in that there is a conspicuous dearth of evidence for them that you'd expect to be the case and a substantial body of evidence in favor of the formation of false memories in the conditions conducive to where reports of recovered repressed memories cluster.
What's interesting to me is this issue was decided beyond reasonable doubt by the end of the 1990's. While claims of recovered/repressed memory is way past its peak, you still still it show up a fair amount - even once in awhile among therapists.
Memories spontaneously "recovered" while in a highly-suggestible mental state (hypnosis, meditation, psychedelics) are almost certain to be fake.
People do occasionally recall traumatic events they'd previously forgotten, but this looks a lot like ordinary forgetting/remembering (triggered by a specific reminder of the event, usually vague at first, gradually filling in details of questionable reliability). Vivid 'flashbulb' memories are common in trauma, but there's no evidence that they can be lost and then recovered.
Yes, I think that too. The 2 patients I had who recovered abuse memories in adulthood did not recover them in a
therapy session, or even during a period of their life when they were seeing a therapist. Memories just came back abruptly in the middle of an ordinary day.
The issue might be that it is intrinsically very difficult to tell the difference between a putative recovered memory and a newly-synthesized memory, created for any number of current reasons. I would say the work of Elizabeth Loftus in particular has shown that it is shockingly easy to create fake "recovered" memories in ordinary people -- we just seem to be a remarkably suggestible species, and to have few good defenses against having our memory manipulated, either by ourselves or others.
How do we tell, even in principle, whether a memory is real, assuming we don't have factual corroborating evidence? (It only *matters* whether a memory is real if we don't have objective proof one way or the other somewhere else.) One traditional way has just been to ask whether the memory has been present from the time of the thing remembered until the time we ask. That kind of works, because it rules out the effects of current manipulation. At least the problem is reduced to the usual Rashomon problem of people differently interpreting things at the moment they actually happen.
But if there *isn't* a "complete chain of custody" between the time of the event and the present, how can we tell? I can't even think of a theoretical answer. "Nobody would invent/cause a memory like such-and-such " is wholly inadequate: there are no obvious limits to the types of memories people can create or absorb. People remember being anal probed by space aliens. "It seems reasonable/plausible to me that this happens" is also inadequate, as memory is only an improvement over expectations to the extent it differs, and especially as it can easily be contaminated by prejudice. ("It seems reasonable to me to believe that that young black man tried to rape that white woman, so I believe her memory of this is what happened before she shot him.")
What else can we try? It could be that repressed memory is a phenomenon that might exist, but which we can never prove, even to ourselves, exists, sort of a Boltzmann memory.
I agree there's no way to prove a memory is real, but I do think there are things that make it likelier that it's real. As I recall, one of Loftus's studies involved suggesting to people that as a small child they had gotten separated from their parents at a store -- after being told that, many felt they remembered the incident. But I don't recall subjects' fleshing out the memory -- "Yes I was in a Dunkin Donuts and the man behind the counter helped me." But consider a patient of mine who had a childhood memory of finding porno comic books in a certain drawer in her room. She remembered which drawer it was, who the cartoon characters were. She remembered reflecting that her father must have put them in her drawer. She remembered that sometimes old porn comic books were swapped out for new ones. That's a detailed memory. It's also indirectly supported by other details about the father which were later confirmed by the patient's mother. She did not know how old she was when the incident happened, or whether the porn comics period lasted for weeks, months or years, but she was sure it had happened. So am I -- at least I'm as sure as one can be about somebody else's described experience.
It's been a while since I heard her talk, but my vague memory is that was one of the things Loftus found, which makes this whole process sort of mildy horrifying: once people had committed to the fake memory, they did indeed go on to "remember" all kinds of corroborating detail, stuff that would plausibly fit the fake scenario -- and, worse (with respect to detecting the invention) even detail that only they could have known, internal states, stuff they did immediately afterwards, et cetera. That's why I said my impression from her work was that there was *no way* even in principle to look at the content and detail of the memory and tell whether it really happened or not, because in the case of an implanted memory, you would have not only the bare outlines suggested by somebody else, but all kinds of detail fleshing it out that would've been added by the person in whom the memory was implanted. Alarming stuff.
I should add that in none of these cases was there any suggestion of malignancy. The person in whom the memory was implanted had no motive (other than pleasing people) for inventing the memory, e.g. wasn't trying to fool someone, and by all appearances seemed to geuninely believe his new-found memory himself.
Also, she said not everyone was susceptible this way. It wasn't possible to implant memories in everybody, some people were 100% resistant as far as she could tell. But in some people it was fairly easy. I vaguely remember she said there was an association between this susceptibility and other aspects of personality, but can't recall what they were.
I have had 2 patients who described quite convincingly, and without drama, having memories of abuse come back to them. However, that's 2 out of 50 or so whom I've seen, and the other 48 never forgot the physical or sexual abuse they endured as kids. I was not abused, but I have from time to time as an adult suddenly remembered experiences I'd had as a kid -- some of them bad experiences. They weren't, though, memorably *awful* experiences, just routine kid stuff. On the other hand, the experience of being sexually abused as a kid is not necessarily awful in the same way as being beaten is. Some abusers are gentle, and give bogus explanations for what they're doing that the child believes, and while the whole thing is unsettling and weird, it does not always leave the child sobbing and terrified. The events actually become more toxic over time, as the kid understands more about the true nature of what happened. So I can believe that some people forget their abuse -- not because it's too terrible to be remembered, but because it was a weird experience that kind of creeped them out, and didn't make sense, and they sort of threw it in the memory hole with the chaff.
The most coherent integration of the pro-vs-con repressed memory sides I've ever heard was my own therapist explaining that people don't usually repress memories, but they do often repress the *meaning* of those memories. So, you'll recall X thing from childhood, but you won't ascribe significance to it, or will misunderstood how it affected you.
Under that model, I can absolutely imagine somebody "recovering" a memory of abuse in the sense of they always had access, but they tended not to think about it until the meaning came crashing in one day.
The Wikipedia page to me is unclear, because it seems to make a distinction between "repressed memories" which are seemingly not real, and trauma-modulated temporary forgetting of things, which does seem to be real, but I don't understand what the difference is supposed to be.
Quote: "Despite the claims by proponents of the reality of memory repression that any evidence of the forgetting of a seemingly traumatic event qualifies as evidence of repression, research indicates that memories of child sexual abuse and other traumatic incidents may sometimes be forgotten through normal mechanisms of memory.[23][24] Evidence of the spontaneous recovery of traumatic memories has been shown,[25][26][27] and recovered memories of traumatic childhood abuse have been corroborated;[28] however, forgetting trauma does not necessarily imply that the trauma was repressed.[23] One situation in which the seeming forgetting, and later recovery, of a "traumatic" experience is particularly likely to occur is when the experience was not interpreted as traumatic when it first occurred, but then, later in life, was reinterpreted as an instance of early trauma.[23]"
I have a problem here, as (something I've mentioned on here before) a family member is claiming to have memories of abusive behaviour in childhood after going into therapy. They firmly believe it happened and won't entertain any objections, even when it's "I was there and that just did not happen".
I have an idea where they're getting the inspiration for these 'memories', which are distorted interpretations of real events (they read a *ton* of what I'd call 'misery porn' , the sort of "I had an abusive childhood" in the vein of a "A Child Called It", as well as the type of crime novels that are heavily family dramas revolving around hidden secrets coming to light to show that a 'perfect' family was anything but), and they have real psychological problems which are at the root of all this. But the therapist is only going by what they tell them, so they will encourage the family member in their delusions. I'm not blaming the therapist for implanting these, but it can be very difficult when it's "he said she said".
Are there real repressed memories? I don't know, I suppose so, but much less than was the popular craze a while back when the media was full of spectacular cases of alleged horrific abuse all coming to light due to recovery of repressed memories.
I vaguely recall something on SSC that touched on repressed memories but I have no idea what. My understanding is that suggestion is a powerful thing, and when someone walks into a psychotherapist's office thinking they had a perfectly okay childhood and walks out believing that they were severely sexually abused, there is absolutely fuckery afoot. However, even in the absence of shitty psychotherapists who think it's their job to bring up repressed memories with no evidence that these even exist, victims of trauma do suffer from impaired autobiographical memory, and while completely forgetting the trauma may be rare, if someone talks about their own "repressed memories" you can assume that they're talking about the real memory problems that follow trauma. See here: https://www.tandfonline.com/doi/pdf/10.1080/09658210701256423
1) Once repressed memory becomes popularized as a trope, it produces a rash of cases that are indeed fake. This is related to the culturally influenced "Crazy Like Us" phenomenon. [1]
2) People have experiences that they have forgotten, at least in the weak sense of "genuinely not consciously thought about for decades," and then later remember due to some weird cue.
I have personally experienced 2) about things that are not at all dramatic, where I'm certain the events were real. I wouldn't call the memories "repressed" or "recovered" because that makes them sound dramatic where they weren't, but I'm still sure the basic phenomenon is real –– and can feel weird.
I feel like the line between (2) and "repressed memories" is blurry and pedantic. I have personally experienced (2) regarding traumatic memories that I part-consciously/part-subconsciously repressed because they were traumatic.
To be concrete: When I was 21 I realized I was gay. The "realization" happened on a single day when I thought "okay, I guess I'm gay, time to accept that," preceded by about a year of worrying that I might be gay. A few days after the realization, I asked myself what exactly had happened a year prior that set me on this path. Within minutes, I realized I had been worrying about this at least 2-3 years, and within an hour I realized it had been way longer than that.
I spent the next several months trying to remember, contextualize, and unpack my childhood memories. Turns out, when I was 13, I decided to repress my homosexuality long-term, and basically developed a self-brainwashing rulebook. Since then, I had been telling myself it was "only a phase" perpetually, which is why I later thought it had only been going on for a year.
While it's possible I've re-contextualized things in an inaccurate manner, or dramatized the extent to which they were repressed (it's hard to recall not knowing in great detail), I am very skeptical of the possibility that I fabricated the memories. They are too numerous, too consistent, too interconnected with other memories.
I'm not sure how exactly this case differs from "repressed memories." It seems to me more reasonable to say "repressed memories don't work like how they do in movies, and when they seem to it's often a sign of memory fabrication" instead of "no such phenomenon exists"
Everyone in this comment thread seems intelligent and well-intentioned, but is missing how profoundly dishonest people are with themselves constantly. For examples less emotionally fraught than childhood trauma, consider almost anything Robin Hanson has written, chosen at random. If you want the emotionally fraught example, consider that "my parents are good and love me" is an important thing to insist to oneself, and "my parents sexually abused me" doesn't mesh with it very well. In a social context in which the second is literally unthinkable, people won't think it -- and will have an easier time being dishonest with themselves about why they moved as far away from their parents as they could without learning another language and renouncing their citizenship. These days? It's quite thinkable, and thinking the first, "my parents are good people and love me", is almost a bit déclassé.
“People are dishonest with themselves” is both a perfect explanation of how repressed memories could be real AND a perfect explanation of how they could be fake…
Yeah I am not big on the "repressed memories" thing. Quite the skeptic. That said, I did have a period of my life from say ~ age 25-30 where I simply forgot something very specific and noticeable about my life.
Specifically my mom and dad got divorced when I was 4, but then when I was 13-14 my mom remarried another guy for a year, and his two children lived with us. This caused A LOT of drama at the time, and their marriage didn't end great. But I was mostly checked out of the family, living in the basement and not involved. Seems like memorable stuff. We even moved to a bigger house to fit the combined family in it.
But there was a like a 5 year period in my adulthood where I simply forgot my mother had remarried. Or the experience was "misfiled" or something. People would ask if my mother had ever remarried and I would say "no". I didn't have a bad relationship with my stepfather. But for some reason the whole episode was just so ephemerally connected to what I cared about that it just didn't get filed under "mother remarried". Maybe because they got married in Vegas without the kids?
Anyway once my sister heard me say this, and was like "don't you remember Joe and his kids", and I was like "oh yeah, nm I was wrong she DID remarry." And it felt very weird to be telling this small personal detail wrong for years unintentionally.
But like I said I don't think that memory was "repressed" so much as misfiled under "people who lived with us for a while" or something.
I saw the movie Jesus Revolution last night. It's a bit clumsy in its execution, but it covers a really interesting period of Christian revivalism in the 70's. I hadn't realized how much of the sort of modern, "non-denominational" American church had come out of a singular movement. One thing that sort of stuck in my craw, though, was that at the end the film has a card which reads, "Many historians classify the movement as the greatest spiritual awakening in American history."
Like, this is a really bold claim. America has had a pretty good number of Revivals in its history. So I'm curious, if you all had to identify the Greatest Spiritual Awakening in American History, what would it be and what kind of metrics do you even use for something like that?
I feel like it’s pretty hard to argue that it can be anything but the Second Great Awakening. Gave us the Jehovah’s Witnesses, the Latter-day Saints, the Adventists—pretty big thing.
Good point! I was mostly thinking about religious impact but political impact is a good lens too. You could similarly try to connect the Second with temperance and antislavery.
> Jesus Revolution, which releases in theaters today, tells the story of the southern California “Jesus Movement.” The movement left in its tie-dyed wake several quasi-denominations (including Calvary Chapel and the Vineyard), the groundwork for the contemporary Christian music industry, and a host of figures still notable today. It is a tale ripe for the excesses of made-by-evangelicals filmmaking, where drama often morphs into preachy melodrama. To their credit, the filmmaking team—led by Jon Erwin, Andrew Erwin, and Kevin Downes—largely resists those temptations. With solid acting, appropriate pacing, and general adherence to the maxim “show, don’t tell,” the result is an effort that is not cringe-worthy but worthy of support from Christians of all stripes.
I can't say I agree on the pacing, I thought it was a pretty messy script in that sense. I think it struggled to juggle three separate protagonists and their character arcs, and would have been better served by making the movement itself the protagonist. Instead it ends up being almost a highlight reel of the significant events within the three men's lives, and feels like it's constantly rushing to make sure that all of the pivotal moments in their lives are fairly represented. Many of these moments are definitely moving, and would be hard to cut from a script, but the pacing suffers for it.
Felt like there was a lot of bullshit flying around the whole NoFap movement so I thought I'd give it a try for a couple months to see what it was all about. I mostly found it vastly overrated, but there were some interesting changes.
1. My sex drive has diminished.
Most people talk about having a higher sex drive after stopping porn, but the opposite happened to me. Less thinking about sex is super useful, however.
2. I'm more attracted to my partner.
Not that I wasn't attracted to her before, but I guess looking at gorgeous pornstars less often has helped me appreciate how attractive she is?
3. Some anhedonia in the early days.
I know this is the feeling some people get after going cold turkey on something addictive, but it still came out of nowhere. Is porn really as addictive as hard drugs? I could easily avoid using it if I wanted to, and it never interfered with my everyday life. Yet I still suffered withdrawal-like symptoms. Maybe it's a more potent stimulus than I thought.
4. It's definitely easier to focus and get shit done.
This is probably the only thing highlighted by the NoFap movement that proved true for me. Being less horny and thinking about sex less often has made me far more engaged in my work. This was accompanied by a broader motivation to improve various aspects of my life (be more effective at work, more exercise, better nutrition, etc).
I realise this newfound motivation is one of those things that's confounded by me making quite a significant life-change in ceasing porn use. Maybe it's just nudged my brain out of some local minimum, in the same way shock therapy appears to treat depression. Either way, the change is welcome and I don't want to look a gift horse in the mouth.
There's a distinction between no-fap and no-porn. To an extent, arousal begets arousal, so it's not surprising that long stretches of low sexual activity would diminish sex drive. There's a personal balance to be found with masturbation and it probably depends on your refractory period and age. A propos, I can't remember if masturbation improves performance, but I think that the current "common knowledge" is that it does in moderation, if you do it right.
Excess porn consumption (even if you're just watching and not jerking off) can lower baseline dopamine levels. I think some people are more susceptible to addiction in this area than others, particularly if they indulge fetishism and more extreme content.
I have always thought that being undersexed keeps me as a man more focused and attentive and "trying to prove something to the world". Puts my psychology more in a place where I feel like I need to improve my place in the world and take chances and focus.
On the opposite hand being oversexed makes me more lazy, unambitious, conservative, and hedonistic. Despite knowing all that I still have sex as often as the marriage allows and masturbate regularly and it is great. Orgasms are amazing.
Glad that it worked for you. I would expect different outcomes for different people, based on different sex drive, availability of actual sex, and maybe how stimulating is their everyday environment.
There's a fair bit of confusion for most between "sex drive" and "compulsive behavior". The difference is obvious once you think about it, in the same way that being hungry is different than eating when bored or stressed. If nobody knew the difference between those last two and quit stress eating, the experience might get parsed linguistically as "wow i'm so much less hungry these days".
I did think about this explanation, but what led me to frame it as diminished sex drive is because that felt more consistent with my experience.
Even though I felt more attracted to my partner ex-post, my desire to initiate sex was still higher ex-ante.
Also, and I know this might sound like bullshit, but it made me look at women slightly differently. Like I still look at TnA as much as the next guy, but my level of arousal after looking at it was much lower. I see how this might be explained by the kind of "compulsive behaviour" you mentioned, however.
You may still be right, but it's hard for me to rationalize why the urge to have sex with my partner was higher before stopping porn? I guess it's possible that I'm still recovering from porn use, and the withdrawal symptoms have yet to go away? Or may be using porn for so long has led to a kind of permanent depression of non-porn related sex drive?
> it's hard for me to rationalize why the urge to have sex with my partner was higher before stopping porn?
Seems straightforward to me: Watching porn teaches you to think about sex often. (Just like reading Facebook/Reddit teaches you to think about Facebook/Reddit often.)
I will make an analogy with food. The simple model is something like "hunger makes me want to eat". But the actual model is more like "hunger makes me *think* about food; thinking about food makes me want to eat". The difference is that in the latter model, there are also *other* things that can make me think about food... such as seeing food, or talking about food. So if I started watching pictures of delicious food in my free time, I would expect to desire the actual food more often.
How much of the changes do you think were about the masturbation itself, vs eliminating porn use? Eg, what would the results be if you continued to fap, but with no (or much tamer) visual aids.
It's an interesting counterfactual and I suspect the increased attraction to my partner had more to do with porn use than with masturbation. Have no idea how it would effect the other outcomes. Maybe it would mean less withdrawal symptoms?
Scott - given some of what you've commented about AI alignment recently, especially in the 2018-predictions-grading post (that is, current AI being more humanlike than expected), I want to ask if you've read about Shard Theory, and what you think about it if so. It describes an approach to AI alignment that builds on imitating human values, i.e.:
I mean, I imagine the answer is 'yes, of course I've seen it and my comments are informed by it', but I'm not that certain about the social dynamics, so it seemed a valuable thing to link just in case. The approach seems obviously right to me, personally, at least given that alignment-by-utility-function is so far a failure.
I know you asked Scott, not me, but I think any approach that tries to install in the AI some version of what governs human values and human behavior would be a disaster. Look at what people are capable of, for fuck's sake. Read the news. Read the savage and crazy comments on Twitter. Most people have some failures to align with the good of humankind. Many people have enormous failures to do so.
Looking for drugs and knocked some shit over, and put it back wrong in a panic would be my first guess.
I once had someone break into my car, move everything inside around a bit, and then just leave it all right there despite there being some minor valuables etc.
I think they were hoping for a laptop or something.
My sketchy ex copied my keys without my knowledge, and once while I was our of town used my living room as a staging area to pack for a trip (her place was tiny). Over-tidied afterwards and I noticed immediately. Freaked me out for a bit until I figured out what had happened.
First, you have to find out did that really happen. Back in my social housing days, we had a client who was fine when on their meds for schizophrenia, but used to regularly go off them, and one of their delusions was precisely that: people breaking into their house and moving things around. We used to regularly send workmen out to change the locks for them, not that we believed it happened, but it was an easy and relatively cheap way to keep the client happy for a bit (until the next set of delusions kicked in).
Not saying your friend has mental problems, but first make sure it *really* happened.
Israeli Mossad used to do that as a warning, to people who were on their hit list. Of course, their hit list was pretty inaccurate and they murdered a lot of innocent people. Oh well!
It's so tiresome to see people who don't think that countries have intelligence agencies. They did that to Gerald Bull, who was working on a V-3 style supergun for Iraq.
>During the months before the assassination, Bull’s apartment suffered several non-robbery break-ins, probably as a threat or a warning.
>On March 22 1990, he was assassinated. Bull was shot (7.62 mm) twice in the neck and 3 times in the back at point blank range while approaching the door of his apartment in Brussels.
>The gun was silenced. No one heard the shots or witnessed the assassination. The killer took the time to collect the bullet-casings.
>When police arrived at the scene, they found the key still in his door and his unopened briefcase containing nearly $20,000 in cash. The assassin was never identified.
His murder has been covered by that famous chemtrail and hollow earth conspiracy theorist show, PBS Frontline.
Agreed. Hopefully someday we can discover what country was responsible for Bull's murder and hold that country and its agents accountable for their crime.
Does your friend have a carbon monoxide detector? There was a famous reddit post of someone who had that kind of stuff happening and it turned out he was suffering from carbon monoxide poisoning.
And then pick something different, because I am so bored of recent pop-non-fic. Do Pride and Prejudice! Do The Very Hungry Caterpillar! Do a 2004-edition Lonely Planet guidebook to Peru!
I have 3 subscriptions to Razib Khan's Unsupervised Learning to give away. Either reply with an email address, or email me at the one I specify here: https://entitledtoanopinion.wordpress.com/about
What do you think about the National ADHD medication shortage? It seems to be impacting many many people. Is it worse in big cities? What is causing it? What are the results going to be for so many people to suddenly be forced off their medications? Why must adhd meds be treated so intensely? Is there anywhere on earth where these meds can be purchased otc? It seems really unfair to make it so hard for people with adhd to get effective treatment.
The regulatory causes are, of course, infuriating but they are to be expected. I am most annoyed with the pharmacies that know there is an issue, but seem to have no interest in being on the side of their customers. I know they can't solve the shortage, but they could at least be more proactive about communicating about what they have in stock, when they expected to get stock, etc. I used to get dozens of tests about my prescriptions from CVS, now i just get a text confirming they received the script then nothing, not even a text saying they don't have stock.
It'll be interesting to see if cold-turkeying/tapering a nation of Adderall addicts will have lasting health benefits. As a general rule of thumb, getting sober and reducing drug intake is a good thing. Stimulant abuse in general is probably not good for mental and emotional stability. Hopefully this will have some long lasting benefits.
As I imagine a large number of commenters here would fall into the camp of daily/weekly amphetamine users who are now not happy that their drug of choice has dried up, they may have a different view on this matter. But I'd hope that it causes some people to reevaluate the necessity of their use.
The implications in this comment don't conform to my lived experience as an Adderall user and I don't think they conform to the lived experience of most people with ADHD
>As a general rule of thumb, getting sober and reducing drug intake is a good thing.
Tell that to all the people on antidepressants and antipsychotics. Or in the other direction, all the people who make modern life possible using caffeine to stay productive.
You are improperly applying a heuristic that isn't relevant to the situation. "Drugs" aren't bad, habits that prevent you from being a good member of society are.
Edit: After expanding the collapsed comments, I realized that you were already presented with these exact objections, and dismissed it on the basis that "normal" people gain benefits from it. Scott has already covered this exact objection:
But to summarize: yeah. So what? Ability to focus or "function" is a normally distributed trait, you can't draw a line between people who "need" it and people who use it as a "performance enhancer" (which is everyone, including those who "need" it, since it treats), and people being better benefits everyone.
Also, you seem to have totally ignored the questions about caffeine, which I keep seeing articles attributing basically every modern convenience to (https://twitter.com/robkhenderson/status/1630230741505155074), and is much more relevant to the discussion.
I ignored most of the replies because it isn't something I feel particularly passionate about, and lots of the other replies did seem to be passionate about it, and I didn't want to cause any unnecessary irritation.
I'm not saying "normal" people gain benefits from it, I'm saying everybody gains benefits from it. As someone who also functions much much better on drugs like Modafinil/Adderall and so on, let alone Nicotine/Caffeine, it seems to me self-evident that people who are in industries and social circles where everybody is using a particular amphetamine performance enhancer to do better at work etc. would be ticked off when supply dries up. I think in general dependency on a substance to reach a baseline level (such as nicotine) is not a good thing.
One of the commenters on the Scott article above lists the following symptom they received:
"A vague, sourceless anxiety that regularly showed up about 8 hours after taking my meds and lasted half an hour to an hour."
This is just a stimulant comedown. I'd be very surprised if long term usage of daily stimulants which are much stronger than a cup of coffee could fail to impact emotional and mental stability. My point was only really supposed to be that it'll be interesting to see if some people come off it permanently as they go cold turkey- probably not.
>I'm not saying "normal" people gain benefits from it, I'm saying everybody gains benefits from it.
Sorry, this is sort of splitting hairs/a failure to communicate effectively on my part. Clearly that's correct (as it's also what Scott says), but I meant that since you don't seem to have a problem with prescription drugs that only benefit the subset with a disorder, your objection to the set of all benefitting must lie with the neurotypical subset of the benefit, modulo those with a disorder. This was probably unnecessarily confusing on my part, sorry.
“Adderall addicts” right. 🤦♀️ it’s every adhd medication now, not just adderall. And what will happen is people will fail out of school, lose jobs, mess up stuff, and possibly get addicted to worse things in an effort to cope. Many will cry. Some people will be fine, sure, but others will be deeply frustrated and probably some will commit suicide.
Would you call depressed people Prozac addicts?
Or how about we shut down all the liquor stores in the country? You cannot convince me that Ritalin is more addictive or dangerous than vodka. We don’t give tequila to middle schoolers for a reason...
It would arguably be safer for every person on earth to take Ritalin every day forever than for people to periodically get drunk, yet the liquor stores are all still open and widely available. Very frustrating.
Do you feel this way about coffee drinkers? Why don’t they get the addict label? Their is a clear stigma against people with adhd on treatment. Yet studies show that the treatment works and is safe. I personally feel it should be available over the counter to adults.
It’s not a performance enhancer for people who need it, it’s a focus and routine/mood stabilizer and it’s ridiculous for it not to be available while alcohol, coffee, and cigarettes remain legal and easy to obtain.
Again, you cannot tell me that alcohol is safer than Ritalin. It’s 100% not. And alcoholics abound, yet we aren’t limiting the supply in any dramatic way.
>Do you feel this way about coffee drinkers? Why don’t they get the addict label?
I don't feel judgemental about any drug users, until it impacts others around them. But sure, we can say "caffeine addict", I myself am one. But we don't use this term only because of the ubiquity of caffeine addiction.
>It’s not a performance enhancer for people who need it, it’s a focus and routine/mood stabilizer
Fang correctly pointed out that it is indeed a performance enhancer. I'd also suggest that taking a drug to be able to follow a routine doesn't seem to be in the same category as e.g. antipsychotic medication. I would benefit (in terms of productivity) greatly from this drug, as would most people.
>and it’s ridiculous for it not to be available while alcohol, coffee, and cigarettes remain legal and easy to obtain.
I'd say adderall is a different beast to coffee and nicotine in terms of magnitude/impact. You're obviously correct re alcohol, but that's the same as most other drugs. And it ignores the deep seated cultural infatuation we've had with booze for the last 6000 years (and caffeine in some places).
>Again, you cannot tell me that alcohol is safer than Ritalin. It’s 100% not. And alcoholics abound, yet we aren’t limiting the supply in any dramatic way.
Maybe we should..? I think society would probably be a better place if the top 10% of alcohol consumers cut consumption by 75%, and the rest by 50%.
Even if we decide it is a performance enhancer, why should the government limit me from enhancing my performance? I don’t see a Viagra shortage due to the government... There is no need for these extreme restrictions on this.
My bf and I are joking that the government cut the adhd med supply to slow the spread of AI and to slow down big tech advancements. Lol
But thanks for the link, I’ll give that a read. Though my ADHD medication was never adderall, however now all types are out since the adderall shortage caused everyone to switch... 🤦♀️
As I wrote above, I agree that we don't need draconian gatekeeping on prescription stimulants. Just pointing this out because of the community norm of not saying untrue things, even in defense of good points.
It might have sounded a little flippant due to our negative connotations, but I'm not sure what else you'd call it. The west also has vast amounts of functioning alcoholics/caffeine addicts, but if you'd prefer to say "adderall dependent" then go ahead.
The difference of course with prozac is that many (or most) people on adderall don't need it to function but instead use it as a performance enhancer (see finance, law, tech). There are many people who are unnecessarily prescribed Prozac, but I don't see what benefit they gain from it. On the other hand there are millions unnecessarily prescribed adderall who gain a lot from it.
I don't think think it is a good thing that it has happened in this manner, but the US has got itself into this mess and I'm not sure what the best way of going about getting (nearly) everybody off it would be. Prohibition has a lot of nasty side effects but one thing it is very good at is lowering general usage. It will be interesting to see if this temporary shock results in longer term usage reduction.
Incompetence is my only theory. Apparently the pharmaceutical companies applied to increase manufacturing to meet consumer demand and the DEA denied the request. Which in my opinion, the DEA should no longer exist and certainly shouldn’t be in control of American’s mental health.
Public service announcement: If you are having trouble getting your adderall script filled, ask the prescriber to switch you over to dextroamphetamine for a while. It's almost the same stuff and last I knew there was not a shortage.
I've had luck switching between XR and IR depending on what the pharmacy has in stock. Luckily either version works of me, though it may not for others. I also only get 30 day supplies at a time so switching between the two has less impact.
If they can't fill my script, i just ask the pharmacy what they have in stock and have my provider change it to match. I am lucky to have an NP that is willing to do this.
I’m in south Florida and every substitute is no longer available except for the norepinephrine ones like Phentermine and Strattera, which in my opinion are not substitutes at all. It’s so messed up.
You tried asking for dextroamphetamine? Brand name is Dexadrine. It's not used much. Some docs haven't even heard of it, others are fearful it will lead to addiction. In fact, though, it's very similar to adderall, and a component of adderall. Adderall is 50% dextroamphetamine, 50% plain amphetamine.
I called six pharmacies in the area and asked them to tell me every Adhd medication that they had that was available, that wasn’t back ordered. That wasn’t one anyone had. But maybe it could be ordered.
It’s very challenging because here in Florida every subscription has to be sent electronically, and can only be sent in every 30 days, so you’re left calling around to check availability then calling your doctor back to let them know where and what to submit, because they can’t search all around town for every patient.
It’s really out of control. For example, Walmart pharmacy said they haven’t had any ADHD stimulants available for the past 3 months, aside from Strattera.
A compounding pharmacy said they just aren’t even taking adhd prescriptions anymore because they take so long to fill that their phone lines are clogged up by patients calling and calling to check on the shortage...
Walgreens and CVS were doing better, getting shipments of methylphenidate in periodically, but now there is a long waiting list and no shipments for a month for anything time release.
Wow, that sounds maddening. You might call pharmacies and ask if dexadrine (brand name) or dextroamphetamine is available. It's used so rarely that if you ask about "any ADD med" they may literally not even think of the stuff; Also helps to be willing to buy in size pill, and cut it in pieces if dose too large, take several if too small.
I basically gave up and have been letting my life descend into disorganization. Taking fish oil etc. Trying my best, but it’s extremely not ideal, especially to just suddenly be cut off.
As a Muslim with a deep sense and reading of history and technology. I find critical thinking and logic as a way to distinguish truth from falsehood. This advice is repeated multiple times in our holy book.
Also another thing is that sometimes you can not decide on your own, because you are not an expert in every thing so asking experts critical questions and getting feedback is most important which most people don't bother to do. And read upon the relevant subject matter. Mass media and TV forget about any Truth in them. Read a lot, think critically, talk with experts get feedback, repeat the loop.
Please see my response to Carlos. This is not trolling, I am a Muslim, and this emphasis on critical thinking and logic is not at odds with my religion or it's foundation. Yes a group religious persons in Islam might say their is no place for critical thinking but more than 95 percent has the same views that critical thinking and reasoning is a foundation of Islam.
That's odd advice if it's stated so baldly in the Koran. Religions are not reasonable, they appeal to faculties other than reason. Using critical thinking and logic on the Koran would just cause you dismiss the whole thing.
I would differ and point this is a false assumption. If the Quran was so easily dismissed we would see 1000 arguments and reasons against it but their is none.
Also I would give an example so the chapter in Quran about people of the cave, a group of Young people who slept for 300 years in a cave away from persecution of the idolater govt, do you know what Quran quotes about their speech " why would not they i.e govt would give us a clear proof of the idols being Gods, we would worship the one who created heaven and the earth" . If the God who gave us Quran fears the logic and reason why would he tell us the foundation of a story is that their is no clear proof about a certain fact and people want to impose that on you. For the religious choice this is exact quote from Quran " there is no compulsion in religion" , and " your job o prophet is to just give the message, god is seeing the actions of his people".
It is trivial to dismiss the Quran, most of humanity does it. The atheists in particular really pick it apart. Now, I do think Muhammad was the last Prophet of God (though I do think the Quran is not perfect), but I realize this isn't a rational position.
"There is no compulsion in religion"? What about all the conquests and Muhammad killing poets that made fun of him?
Muslims conquered vast areas of the world, by and large other than maybe a few historical events where animosity or revenge would have involved they never compelled societies ( which itself would invoke punishment of God for those Muslims who do it ). An example is india and africa , jews and christians lived in muslim areas and were never compelled.
And to answer your objection on the verse on one hand side and the conquests on the other hand side, I would first tell the divine law which operates when a prophet is sent to a people, he gives them the message, the time and then God decides the result in the end, the Prophet Muhammad spent almost 20 years giving message of Quran to his people, he never compelled people to believe it in this period, it was always the message, he did not try to take a divine army the second day and come punish everyone the next day of the message to his people, 20 years he did what God commanded him to do and that was just give them the message, in the end God decided of the punishment which was in that case taking over the city of Makkah from the idol worshipper and banning them from ever entering in that city, The Prophet did not killed the conquered people or nor he punished them, in case of other prophets like Noah , Moses , Lot etc their people were destroyed as a punishment in the end.
Regarding the conquests and the reports of killing some poets, the Prophet let go of some who made fun of him and pardoned them, but the others who were killed, we have to remember their was a functioning society, their was animosity and revenge and open war so to protect the people of the state certain action were taken against people who were harming Muslims and these poets were doing that job these were no simple poets like today, they were sort of a power structure in Arabian society and had influenced for harming Muslims through insult of the Prophet and instigating , as a ruler of a state, sometime these actions were necessary so that people will not cause chaos and harm in society against followers of the Prophet. The Prophet pardoned the killer of his uncle, he pardoned the Maccan people , he never punished those who sometimes talked with him harshly and were rough, so we have to look this as a whole and then this would become clear.
Again regarding conquests, their was divine law for people of Arabia, they were being given the message, after its completion of the time, God willed to take the power away from them as a punishment, but no one in Arabia was compelled , those who fought against Muslims and lost lived their , some converted , others didn't and gave tax sometimes but not always. So it is a large picture we should not overlook. When Muslims took over the jerusalam, the christian lived their for centuries, even jews came and lived their. So during the life time of Prophet, their was a divine law which gave time to his people and told them the message and then came the results, and for us the remaining humanity, their is again the divine law , their is time for people, their is message and the results/consequence will be in the next life. Their is no compulsion, does not mean their will be no consequences in divine law, it means it is the choice of free will and their is time period to decide or reject. Sorry it was a long answer but the question require a large picture.
A Muslim coworker gifted a Koran to me. So far it’s been the most difficult holy book I’ve read. I’m referring to the required repeated praising of God. I have trouble getting past that.
The repeated praises are not only for ryhmic beauty and majestic experience but have wisdom and lesson in them. For example when the passage ends with God is wise and all knowing, and sometimes ends with God is all Knowing and is most wise. Scholars tried to understand why God changed the words often like that and what they found is the context of previous verse shows which one would be more appropriate and hence we find those praises accordingly so their is a huge amount of learning and beauty in those.
I grew up reading the Qur'an. The repeated praising of God was enjoyable, I looked forward to it. I say this as someone who otherwise has a strong intrinsic preference for content novelty.
Awhile ago Scott reviewed the book Origin Of Consciousness In The Breakdown Of The Bicameral Mind. I saw something recently on reddit, something called the "third man factor" which is when people in stressful, dangerous situations like car crashes often feel, hear, or even see a seeming presence of another person who guides them through the crisis.
Most famously apparently by Ernest Shackleton, when he was hiking across South Georgia island to be rescued along with 2 others at the very end of their disastrous expedition, later said that he often felt like there were 4 people there instead of 3. And didn't say anything about it to the others until later one both of them confided feeling the same thing.
And the reddit thread is filled with people saying "yeah, that happened to me, one time years ago I was in a car crash..."
Seems like it fits the Jaynes theory. I'm not sure it fits Scott's alternative version of "'consciousness' really means 'theory of mind'" because presumably the people involved in these situations, like Shackleton, have a modern "theory of mind" and usually no preexisting mental illness. But I don't know enough about it to say. It's also hard to say what "theory of mind" people have in those situations without having been in it. Do they think that they have "their" internal monologue and the "presence" as separate entities? Or has their own internal monologue been "shut off" temporarily while the presence is there?
The evidence for self-awareness (identifying with inner voice) before 40k years is not great. Maybe that date is closer to 12k years, depending on how seriously you take the Sapient Paradox. If our psychology is recent, then absolutely the primeval state could be exposed in times of stress.
I read the "Third Man Factor" after reading Scott's review of "origin of consciousness" and also Kaj Sotala's posts on LessWrong about the muliti-agent theory of mind (and Internal Family Systems therapy). I've read more on all these since, and my conclusion is that they're all discussing different aspects of how multiplicitous the complex systems we try to describe with the one word "mind" is.
This sounds like a benign, fleeting version of Dissociative Identity Disorder, which is hypothesized to be a defense mechanism against trauma, such that the victim can close themselves off to reality and make it feel as though this is happening to someone else. They might unconsciously externalize certain functions to the 'third man' who keeps a cool head even when you are panicking and overwhelmed, and once they're back to safety, they resume having an undivided identity again.
This has some parallels with PTSD: people who experience trauma do not always develop PTSD, they can either process the trauma and manage to adjust back to normality, or they can become stuck in that state of mind and unconsciously expect danger even in safety. There might be a similar mechanism for DID, that dissociating is a normal response to extreme stress that only creates DID under certain circumstances.
In the case of Shackleton, it would not be the least bit surprising. The number of life-or-death decisions he had to make correctly for the outcome that resulted is staggering.
Has anyone else experienced an identity crisis because of recent developments in AI? I spend a lot of my professional hours providing my perspective on business issues. I’ve always known there is nothing I say that some other smart person who spent a lot of time reading about the subject would say. But I was clearly contributing to overall GDP by learning and synthesizing that learning in the right setting. LLMs do a really great job of synthesizing information, and this is starting to impact my overall curiosity because I see less value in compounding knowledge in my head.
Unless you think there is zero understanding of psychology required for your work, you're probably good to go for quite a while.
Although, on the other hand, it's certainly true that a future AI might plagiarize what you've painstakingly assembled, either directly or as a mash-up, to answer someone's inquiry, so you might want to make common cause with the digital artists who are similarly horrified.
I mean no. I do something similar, and I just don't see a threat in the near future. So much of what I do in my consulting is handholding and therapy and emotional manipulation and just person-to-person forced accountability. None of which an AI is great at.
And AI will be great at getting a 95% accurate version of what the current rules/law/regulations are. But I am already decent at that, it might take the AI 3 minutes (you need to format a query), and me an hour.
But all the hard work and what I am paid for is actually getting people to do those things in the manuals/rules. My clients can typically already read those themselves if they want (they don't).
Plus I am providing my understanding and experience with the unwritten rules/behaviors/practices in the field, something the AI will have zero access to for quite a long time.
I expect demand for my type of work to go down very slightly, but also my work to get somewhat easier for me (which could lead to more competition from human competitors).
Now, now Fully Un -- you've been a good comment bot and while the GPT-based versions that replace you will be better, that can't take away any of your past glories. Your virtual presence here and at SSC will live forever as an important chapter in the development of artiicial minds. Well done, old soldier!
I have very bleak feelings about some parts of it: The idea of AI churning out lots of generic, cheesy story books for kids really depresses me. But then the idea of AI getting *good* in the way my favorite writers are good distresses me even more. Wonderful writers have been very important to me. Being able to write wonderfully well comes closer than anything else I can think of from capturing my sense of what it is to live an acute, meaningful life. When I think about AI being able to write that way, it's like the rug is pulled out from under me. I'm not exactly *against* it, but it makes me feel lost and despairing.
Can *you* write like your favorite writers? You're about a billion times more talented than any existing or plausibly forseeable AI, plus you have the benefit of intentionality. Can you just tell yourself "write like that!" with ever so many examples at hand -- and succeed? Could you do it even if you had years to devote to reading every word a given author had ever written? If not, why do you think an AI could?
The AI is not writing an entire story here, just assisting. My boss likes to use the general distinction between "auto pilot" and "power assist" and the thing in the link is still in the "power assist" category. It is, however, writing complete paragraphs and I don't find it unreasonable to see the length (and quality) of the passages grow over the next decade.
Why is that a worry? If you were a carpenter, would you worry about the advent of cheap power saws? Or would you say "yippee! I can get the important work done a lotfaster without having to allocate X hours of drudgery hand sawing?"
When AIs write a sequel to "Pride and Prejudice" that is not only like the original in flavor and tone, but also creative and interesting, as if Austen came back from the dead, then I would ponder things. As it is, not even gifted human beings can pull off that particular trick. So my expectation that a human being can write a computer program to do it is close to zero.
I think we (you :-)) might be losing track of the actors here.
Eremolalas seemed worried about AI writing story books for kids.
You seemed to think that AI generated stories was a long way in the future (unless I mis-understood your response to Eremolalas).
I chimed in that AI is already being used to write fiction, so it may not be all that long before Eremolalas' fears are realized.
And you have just replied (to me) suggesting that this shouldn't be a worry.
*I* am not worried. Eremolalas is. And I don't think he/she/it will feel less lost and despairing with the power-tool analogy (though I think that IS a reasonable analogy).
My read is that Eremolalas was worried about AI writing *literature*. Good stories, stories that amaze and astonish. I don't think he or she is worried about AIs writing schlock, student C+ essays, corporate PR releases.
AI is just math, which is this very mysterious thing that has lots to do with the truth. It's not exactly awful that something as pure as math is potentially capable of producing deep art.
I still think it can't take over that though, because it's missing the communication aspect of real art.
"I have very bleak feelings about some parts of it: The idea of AI churning out lots of generic, cheesy story books for kids really depresses me."
In the book "The Diamond Age" one of the background details to the story is an Artificial intelligence (called pseudo-intelligence in the book) creating children's stories tailored to the young protagonist. Once we have a machine/program/bot doing this then there is no reason for the stories to be particularly generic (though they will very likely have a lot of tropes ... which is pretty common even for human hand crafted children's stories).
No, I spent some time with GPT on stuff that I have a professional background in, and found that even if _gently_ probed it will demonstrate it has no clue what it's talking about. It's better than a magic 8 ball since its word salad might point in the right direction for a google search if you know absolutely nothing on a topic, but it is no substitute for having deep domain knowledge that is applied to a specific situation. Of course, I'm of the opinion that LLMs will show massively diminishing returns in subsequent iterations because they don't do anything that can actually be called "understanding", though I've read some arguments to the contrary (we will see!).
While I understand your concern, I don't think it's necessarily right to imagine LLMs as replacing your utility.
If you want something practical, start practicing how to integrate e.g. ChatGPT into your workflow. Being someone who's good at writing prompts and understands when to use LLMs (including what they're good at, what they're weaker at, and what value you can add) is part of what it means to be good at learning and synthesizing. Your coworkers who haven't realized this will likely be less valuable in the future, but you can be on that cutting edge.
Now you know the cold chill those West Virginia coal miners felt. I wonder, as an upper-middle class highly educated professional, do your views on rural people resemble the rest of your cohort in society?
I think it's good to be empathetic with other cohorts of society, even if I can never truly feel how they feel. But I am certain that I would never ask the government to enact laws against automation or boycott progress. As much as I'm anxious about the future, I welcome it with open arms.
I see a lot of my work value as coming from synthesis too, and I'm scared of LLMs rendering me irrelevant to decision-makers. At the same time, playing around with LLMs has actually increased my curiosity; I can't stop asking them for 30-second summaries of various topics for instance, knowing full well they spout occasional factual inaccuracies etc.
I bet this is not uncommon among people who are paying attention to this stuff. I'm in a pretty analogous situation, and I have a couple of thoughts on this.
One, I believe pretty strongly in the value of what I call "mental furniture." Although I love looking stuff up on the internet, I've realized that it's no substitute for having enough facts and insights stored in my brain to be able to interpret what I'm reading. I've found that facts as basic as the population and median income of the US or life expectancy by income give me an ability to interpret new related information much more quickly and accurately than someone who doesn't know anything. So although now all the facts are available out there, there's still a ton of value in knowing stuff. Arguably even more so because it allows you to know what to look for in external sources.
Two, LLM responses to mediocre prompts are pretty mediocre. I tend to snicker at prompt engineering per se, but the broader idea of how to frame a question and how to conduct a dialogue to get to a good answer is incredibly valuable. I'm sure you've found specifying the question to be very helpful in your work already, and now it's helpful working with an LLM as well. Another good way to think about this is that now you're a manager. You have a team of interns working for you, and managing them with good research requests will determine how productive they are.
So, there can be a feeling of dislocation, sure, but I think you'll find once you get used to it that your skills and curiosity are at least as useful as before.
Looking over the question again, I would add that you may still need to revisit how you approach your job. If the perspectives you offer are relatively straightforward, I don't think you should give up on curiosity, but you may want to start heading for higher ground proactively from a career perspective.
"Has anyone else experienced an identity crisis because of recent developments in AI? I spend a lot of my professional hours providing my perspective on business issues. I’ve always known there is nothing I say that some other smart person who spent a lot of time reading about the subject would say."
I have not (yet), but I've read a comment on another site I frequent that could be accurately identified as a post from someone having an identity crisis. So you aren't alone.
"LLMs do a really great job of synthesizing information, and this is starting to impact my overall curiosity because I see less value in compounding knowledge in my head. "
I wouldn't give up yet. The history of AII is that the first X% seems pretty easy, "everyone" extrapolates and then it slowly becomes clear that the progress slope gets VERY STEEP VERY FAST. ChatGPT is wrong often enough to be of concern. I'm not convinced that just making larger models will fix this problem.
Have you tried Bing AI? I have been testing the beta release and it seems more reliable than ChatGPT. It provides real and accurate URLs to resources that it mentions for example.
It has also managed to answer obscure questions about my local neighbourhood that Google totally failed to provide a useful answer for.
I'm surprised not to see much commentary on the use of large language models/generative AI to make probabilistic forecasts, whether for contests like Scott's (and my own) or for prediction markets. In my own analysis I found a 0.68 correlation between Bing/Sidney's forecasts and the ensemble forecast of my historically most accurate forecasters. Is anyone else doing this sort of work? I'd love to read about it.
Also, if you haven't done this yet, I highly recommend you feed the text of your 50-question contest—or at least the subset of props not yet resolved—straight into Bing/Sydney and compare the forecasts with some ensemble (or the straight group average). I think it will correlate above 0.5.
Looking forward to it. It's amazing to think about how much the "ForecastTech" world has improved since Zou et al. in June 2022 to my simple little at-home experiment in Feb 2023!
In this case, I did it in the simplest possible way: by pasting the text of the prop into the Bing chat window (with a little preamble). The results that came back were decent-looking in the aggregate but kind of wild in the specific, so I went on to suggest another set of prompts that could equip a human forecaster to make a decent hybrid forecast: https://braff.co/advice/f/driving-miss-sydney
Thanks for sharing. I like the analogy to autonomous driving levels. It also occurs to me that it's a bit like managing an employee depending on their level of task maturity.
Regarding the actual steps you took, I wonder if you could productively reverse them a bit and have Bing come up with the wild estimate first and then feed in the various intermediate prompts as ways for it to critique and revise its initial estimate, with the goal of getting a level 3/4 answer.
Hypothesis-driven problem-solving seems to work better with rationalists than with Sydney. She gets quite attached to the first thing she says, and it's hard to get her off of it, though if you change a parameter (like the time to resolution) she does amend the answer.
Anecdotal: I personally can't tolerate either LSD or MDMA because of physical side effects (muscle spasms, tremors, jaw clenching, fever, vomiting), which are similar to but less severe than the symptoms I experienced in serotonin syndrome from an antidepressant. It's hard to enjoy (or identify) any social benefits when you're writhing feverishly in bed or shaking and vomiting on the bathroom floor.
But since neuroscience research says autistic people tend to have *low* serotonin levels, my unusual sensitivity to serotonergic drugs suggests that I may not be representative, and also maybe that my autism diagnosis is wrong?
With LSD, I actually tried microdosing first because I was worried about my reaction. I'm definitely not hypersensitive to the psychological effects.
At 25mcg, the physical effects were pretty tolerable - mostly just jaw-clenching - but the mental effects were barely noticeable. I thought I was better at seeing patterns/pictures in noise, and I seemed to be a bit more sensitive to the illusion of movement that you get when you stare at something for a while, but definitely nothing mindblowing.
50mcg was the same thing but stronger, plus a mild euphoria and some muscle tension and tremors.
Then 100mcg was the full-on painful spasms, high fever, GI symptoms, etc. There were definitely some interesting mental effects, but still nothing crazy. No deep meanings or universal truths to be found. I'd mostly describe it as enhanced pattern-recognition and an altered sense of time.
I haven't tried lower-dose MDMA because the full dose was just such a uniformly horrible experience that I have no interest in repeating any part of it.
I have been considering trying psilocybin, though, to see if maybe it hits slightly differently than LSD.
The reported differences I've seen between psilocybin and LSD are that LSD is more cerebral, intellectual, while psilocybin is more "embodied", as in, you feel it more in your body than LSD.
That's the first time I've heard a terrible experience with MDMA. Did you test for purity?
It's so weird. MDMA puts thousands of people in the hospital every year and kills a few, but people are less willing to believe that you had a bad reaction to it than they are to believe you had a bad reaction to weed or even caffeine.
MDMA hospitalizations and deaths are pretty much all due to contaminated drugs or dehydration. That's not what you're describing. I'm not saying it's outside the realm of possibility, but it's certainly the first time I'm hearing of such a case.
I would think intolerance to the bodily effects of lsd and mdma is more evidence for an autism diagnosis because autistic people are known to struggle with interception and hypersensitivity.
Maybe in general. In my specific case, I'm not talking about subjective hypersensitivity to harmless interoceptive signals; I'm talking about being hospitalized for a rare life-threatening drug reaction to an SSRI (https://www.mayoclinic.org/diseases-conditions/serotonin-syndrome/symptoms-causes/syc-20354758), and also having similar but milder physical reactions to other serotonergic drugs.
I have sometimes suspected I have autism or asberger's, and acid trips have made me feel like I'm cured from it. These days I don't feel as autistic, and this wasn't the result of any one trip, but the psychedelics I tried seem relevant to it.
If you're worried about trying a psychedelic because you're autistic, well, you should be worried because psychedelics are inherently risky. The last time I did LSD I went psychotic, and later found out that the psychosis stuck since I went psychotic again when I went off meds (on the reasonable theory that I would be fine if I didn't do any drugs again. I didn't, and I still lost it).
But, those psychoses were critical parts of my spiritual development. You can read about that here:
It's not like it's that common to go psychotic from doing psychedelics, and it didn't happen to me from a single trip. You will probably be fine if you do it once a year, for example.
On your second question, my first thought was "Why would it have anything to do with the gut microbiome?" but my second thought was that it's sufficiently plugged into general metabolism that it would actually be surprising if it did NOT affect the microbiome. So yes, excellent question, never would have thought to ask it.
A quick search turns up one single experimental study on LSD and the gut microbiome, a study in mice: https://bpspubs.onlinelibrary.wiley.com/doi/abs/10.1111/bph.15977. They gave mice LSD for seven days and compared their gut microbiomes with controls, and found that the Shannon alpha diversity decreased (whatever that means) and altered the ratio of a bunch of bacterial species.
There is at present TOTAL CRISIS of authority in the media and intellectual world. I ask ACX readers to state what sources of informatiom they actually TRUST, sources that can include opinion and speculation.
Nobody (not myself, at least, that's what this comes down to) will scourge or harangue you for trusting these sources, or at least that is the intention of this.
I trust the following persons/sources to tell the truth:
Scott Alexander is trusted implicitly, mostly because he "shows his work" and is fundamentally committed to the question of "trust". I think that is why we all are here, to be honest.
I have recently talked to co-workers, friends, who believe that Three Mile Island was an epic nuclear disaster, and that Atlantic hurricane data prove that storms are getting more severe and intense b/c climate change. Both those things are not true, and repitition of them are lies. In these two small things, I see people I thought to be rational, refusing to listen to or consider any evidence. My questions about Three Mile Island were met with, "Where did you hear that, ON THE INTERNET?"
Recent news like the Washington University (STL) whistleblower, who published a long essay about what she saw as huge ethical lapses at the clinic for "transgender care", can easily be dismissed because the "news" was only published on "right-wing" sources like the Epoch Times. But this creates a tautology, where partisan organizations deny a voice to stories they ideologically diagree with.
I am anomic, afraid, without constants or norms, and unable to trust anything. What is any way out of this?
>I have recently talked to co-workers, friends, who believe that Three Mile Island was an epic nuclear disaster, and that Atlantic hurricane data prove that storms are getting more severe and intense b/c climate change. Both those things are not true, and repitition of them are lies. ...
My example for this is a brilliant retired professor friend (who is also a lifetime NYT reader) who seems legit convinced climate change induced extreme weather events will be killing significant percentages of the US population soon. Like there will be one super serious tornado, and he will be convinced that there is the super concerning exponential trend. And I will be like "climate doesn't work that way, while there might be changes they have so far been very gradual and project to be gradual", and he will bring out some NYT factoid about how tornados jumped from 976 to 1429 between 2016 and 2017 and OMG by 2040 there will be 8 million tornados.
And then I point out there were 1,126 in 2018, and 1,075 in 2020, and he just seems to think that I am just in denial. The NYT has taught him that we are all going to die, and goddamnit he isn't going to let facts change his view about that anymore.
As for who I "trust"? Scott? Maybe no one...
IDK, I was always a highly skeptical person (was giving the local bishop the inquisition on the problems with Catholicism and Christianity at age ~11), so my trust level was always very low (shitty childhood).
Might be easier to do a time series.
Trust for: (soft science papers, hard science papers, NYT, NPR, FoxNews, random bloggers):
"Air-Horn" is my term for these subjects. Things like computer security, or encryption, immediately run into somebody blowing the AIR-HORN of "child pornography".
Blowing the AIR-HORN obliterates all thought or reason, there is an EMERGENCY, can't you hear the AIR-HORN?
If the other forms of power generation were as safe (in practice!) as nuclear has been, we'd be hearing all about how wonderful it is from a safety prospective.
Exactly this. For many people, millions, including my friend, the idea that "Three Mile Island" was "terrible" is embedded in their minds at a deep level, and not even handing them a printed copy of the "REPORT to the US SENATE on the TMI Accident" could dislodge that.
We are also at a crossroads where even printed documentation is suspect. My friend's dismissal was virulent: "You read something ON THE INTERNET?" -- I can't convey adequately the level of contempt that he (a professional actor, and voice-over artist, and very, very good) charged that with. And now we see AI's able to generate documents that seem exactly like the most persuasive and certified reports of our society.
If a document printed on paper in 1980, complete with all the artifacts like pop fonts, tables and charts made with a typewriter, and with physical signatures and stamps from the National Archives, retrieved from a US Govt source, if you are not even willing to look at that, what defense have we against the million-lie astroturfing of tomorrow?
Oh, nobody will even entertain or look at any thing that might gainsay their previously ordained certainty. It is sad, many will not risk shattering the comforting worldview -- a worldview and action-sphere that distills down to doing exactly what their (left-wing, Berkely/Cambridge Green Party granola-cruncher) mommy tells them.
Generally speaking, my heuristics are vaguely like :
1- Claims with no obvious emotional (political, religious, etc...) or profit antecedents or consequences are trustworthy.
2- All public statements a politician makes are carefully-crafted and hyper-optimized to do one thing and one thing only : make them look good and increase their chances of success in the political system they are operating in.
3- All public statements a corporation and\or its representatives make are carefully-crafted and hyper-optimized to do one thing and one thing only : make it look good and increase their chances of profiting in the market they are operating in.
4- Everything happening in a place you don't understand the language of might as well be fiction, trust no translation. Your degree of distrust should vary with how concrete and close-to-the-facts the claimed event is : A car crash needs no words to convey, but a complex political drama is ripe for (deliberate) mistranslation and missing context.
5- Generalities are less trustworthy than specifics even if supported with specifics, much less so if not supported with specifics.
6- "Experts" are vastly overrated the vast majority of time. To a very good first approximation, an Expert is a physicist or an engineer or a mathematician or a computer scientists or a historian talking only about the subfield they got a phd in.
7- Imagine that the writer/speaker of every critical piece of media is your sworn enemy and that its target is you, how would that affect your interpretation of it ? What is an alternative phrasing that fits all the facts yet doesn't paint you as bad ? If the piece of media is praising instead of critical, imagine that you hate the thing/person it praises with the passion of a thousand sun, and the writer/speaker of the praising as a hired mouth that you saw pocketing money with your very eyes.
8- As much as it's fun and fair to taunt people you hate by repeating unflattering falsehoods about them, your mind is a sacred place that you should not let falsehoods creep into, contemplating unflattering-to-you truths or enemy-flattering truths privately will not boost the enemy, and will increase your understanding of both you and your enemy.
9- On the subject of enemies, while everything they say must be necessarily discounted by the fact that they will lie and mutilate truth as necessary to paint the something or the someone they hate in bad light, give special private consideration to their criticism if you care about its truth value. Enemies are good at finding inconsistencies and wrinkles in the things they are enemies of, and naturally incentivized for such. The inverse also holds for friends and admirers.
10- Vast majority of news is useless non-actionable negative entertainment. Information exist mainly to fulfill some utility : to base actions on (action optimization), to base views on (value optimization), to increase your intelligence or your mental toolkits of models and ways of thinking (performance optimization), or simply as idle curiosity (exploration), building a model of the environment simply to keep your brain sharp.
The vast vast majority of news fulfill none of these things : you can't base any actions on it because it's simply none of your business ("Putin Invaded Ukraine"), you can't base values on it because it's either too easy or too hard to infer what value you should take out of the whole situation so it adds no useful values (1-"Putin Invaded Ukraine ! Ohh gee, I guess killing people is bad" ; 2-"An actress I don't know claims her boss harrassed her ! Ohh gee, I guess I will just believe this random woman with obvious incentives to lie in a profession notorious for promiscuity and take her words over the random guy that I also don't know and haven't even heard his defense"), they don't increase your intelligence or point you towards any new mental models since they are negative-emotion-heavy, yet forgettable, thrillers with most of the facts bungled, blinded, and reworked to get maximum oomph, and they are not even good topics for exploration, as they mostly induce negative emotions when they're not boring. I can find no worse works of semi-fiction.
> 6- "Experts" are vastly overrated the vast majority of time.
Doubly so if they're being interviewed in some kind of mainstream venue/channel/publication, because then they were likely selected to convey legitimacy to a specific viewpoint. It's very, very uncommon for a mainstream source to feature two experts that disagree and provide multiple perspectives.
> Enemies are good at finding inconsistencies and wrinkles in the things they are enemies of, and naturally incentivized for such. The inverse also holds for friends and admirers.
This has actually been studied a few times under "my side bias". Totally a thing.
I agree, we're going through an awkward period when the credibility of corporate media is crashing, yet there's nothing to replace it yet as a standard of credibility.
Jesse Singal and Katie Herzog of the Blocked and Reported podcast. But it has a fairly narrow focus (mainly culture war stuff) so may not be what you are after. I respect their commitment to being able to back up what they say and to correct what they get wrong. What is most worrying to me is that some of the people who attack them are perfectly respectable intellectuals who just seem to have lost their moorings.
Russell, I agree I find them relatively trustworthy for journalists. Also I promised you a podcast comment, and I have a draft of one in my email, but it keeps growing and growing because I have listened to about 30 of them now.
So the TLDR of the as yet unsent comment is I am really enjoying the podcast and think your choices of topics/guests are interesting, your voice pleasant, and interview style maybe not an "A", but a solid "B" that isn't taking anything away from the podcast. You let the guests have a lot of rope to speak, which is great. The audio quality is good enough to not be noticeable one way or the other (which is all I am looking for there).
I have a long flight coming up, so I will try and send you a proper review during that.
TL;DR up front: paid-for industry sources are the best for obvious reasons. In the UK, the most consistently-good free source is the BBC (written news, not TV/Radio)
Explanation:
From ~2012-2016 I worked in various editorial roles for a large legal publishing firm. For a good part of that period I was on the 'news analysis' team for our clients (lawyers & those in that space). My job was to wake up very early and read every broadsheet newspaper as fast as I could, along with a feed of articles from AP, Reuters, BBC, Bloomberg and a few others. My team and I then selected articles we thought would be of interest to our clients and wrote summaries of them. This meant that for several years I effectively read the vast majority of the "serious" news output in the UK every single day. There were lots of interesting things about this but the one that's relevant here is that I would read *the same story* reported many times in different papers, and so built up what I think is a fairly good view of the biases & failings of most of these publications. Because I was also reading a lot of industry publications that followed specific stories in more detail than the general prcess, and very often writing up the relevant court reports, I also developed a decent view of which publications tended to make mistakes (or tell lies if you're uncharitable) which were not later corrected.
I think it goes without saying that there is no source on which you can rely to be 100% *correct* - but that's not what you're looking for; you're looking for someone to be *honest*. The problem is that as Scott himself recently argued, the MSM is rarely in the business of telling outright lies; instead they tell *technically the truth* in very misleading ways. In addition to applying all the usual martial arts of rationality, it also helps a lot to be aware of the existing biases of each publication.
Unsurprisingly the best and most honest reporting I saw was the most expensive stuff - the reports written by specialist reporters for Montel, Bloomberg, Reuters etc and for which industry clients (particularly traders) paid a lot of money to access in realtime. A surprising amount of this never makes it into the mainstream press; not because of any conspiracy to suppress the truth afaik but rather because mainstream press (accurately, I assume) thinks no-one will care about it.
For a free, generalist source, the best *by far* in terms of honest reporting in the UK I found to be the BBC news website; a lot was covered and was generally well-handled. The downside to this (and the way this was achieved) is that the BBC tended not to report anything it was not very sure to be true and that it had corroborated independently. This meant that they were often slower to report than other outlets, and reported less detail (particularly when it came to personal or prurient details). Their main bias was towards *authority*, which makes sense as they seem themselves as one. There were also individual topcis on which individual editors and staff at the BBC had strong personal opinions which they failed to conceal (example: during the period mentioned above they were extremely pro assisted dying and would report in great and sympathetic detail on a vast number of cases which were completely ignored by the rest of the media; almost all these articles came from a single journalist. I do not know if this is still the case as I no longer read the news as thoroughly). Most of the time, however, they take fewer shots and get more of them right.
If you can find free Reuters or Bloomberg coverage on the subject you care about it will normally be mostly correct. The downside here is that the best stuff is not free, and relatively few subjects are covered.
You should strongly avoid almost any broadsheet, even 'reputable' ones such as the FT or the Times. They are right just enough to trick you into believing Gell-Mann amnesia shouldn't apply to them, but it still does.
I realise that none of the above is a staggering, new insight. Sometimes the truth is obvious!
> In the UK, the most consistently-good free source is the BBC
The big problem with BBC news, both TV and online, is that it is highly selective. Anything contrary to the liberal world view they wish to project is omitted, or downplayed by being hidden away in a subsection of the website, or left prominent for a shorter time before being whisked into obscurity.
Of course, someone liberal themselves might not notice this much, or may even welcome it, but in relying solely on BBC news they would be in something of a filter bubble all the same.
The BBC have a strong pro-immigration and pro-diversity bias, and will do their utmost to ignore or suppress anything which puts immigrants or minorities in a bad light, or suggests that a continuing liberal immigration policy is not universally popular.
For example, I don't think they reported at all on the large anti-immigrant demos in Dublin and elsewhere in Europe a couple of weeks ago. I also very much doubt they have ever reported, nor ever will if they have their way, on the adverse impact of mass immigration into Sweden for example.
Also, they are clearly biased in favour of Labour (roughly equivalent to US Democrat). If a Tory ("Republican") politician is involved in a scandal, the BBC headline will typically read "Tory politician caught in bed with underage boy" (or whatever the misdeed was), whereas the equivalent Labour headline would read "Politician caught .." and only tucked away in the article, if anywhere, will their party affiliation be mentioned!
> Of course, someone liberal themselves might not notice this much, or may even welcome it, but in relying solely on BBC news they would be in something of a filter bubble all the same.
of course, that's a fully general argument. Right wingers will not notice the bias in right wing oulets, etc, etc.
Since the Financial Times was sold to Nikkei in 2015, they seem me to have moved to occupy the position you describe the BBC occupying at that time, which in contrast seems to have become more of a mouthpiece of the UK government. The BBC's notions of authority also seem to have become somewhat tainted by tribalism. Unfortunately the FT is neither free nor covers as wide a range of topics as the BBC, so it's closer to Reuters/Bloomberg than a general source.
If you treat this as a binary question of who to trust or not trust, you've already failed, as you are just putting yourself in a different epistemic bubble. In particular you should be way of anyone who's main way of marketing themselves is to say the "mainstream media" are bad and untrustworthy, and therefore you should trust them uncritically.
The boring answer is that all media has some degree of unreliability, whether due to incompetence or deliberate bias. In general the larger more mainstream sources tend to be more reliable, taken on net, because they have more to lose from being wrong. Though what facts one chooses to highlight is itself a conscious choice that can inject bias.
>In general the larger more mainstream sources tend to be more reliable, taken on net, because they have more to lose from being wrong.
This only holds if there are independent authorities with enough clout to be able to prove them wrong. Which does not apply to fundamental culture war narratives, as an important example. Sadly, there's no good reason to expect anybody else to be much more correct on net on those matters either.
re: Randomstringofcharacters's comment (in case Substack borks the reply function again) -- bravo to the first part of this. Binary trust/don't-trust completely misses the point of what interacting with information can be like for the open-minded.
However, I'd say that the question of reliability is separate (as in Scott's posts about the media not actually lying much) from the question of 'being wrong'. The media is likely a tremendous net negative in terms of helping people correctly model reality, and I'd call that deeply unreliable. They're not often wrong on the facts, though.
" The media is likely a tremendous net negative in terms of helping people correctly model reality, and I'd call that deeply unreliable. They're not often wrong on the facts, though."
^This.
The operative principle of the media is to push a particular preferred narrative. Sometimes this is done explicitly, like at the NYT or NPR, but can be seen when J-schools talk about "changing the world" or "providing the information necessary to make the right decisions/Democracy(tm)." The object is to influence what people believe and then presumably how they act. To do this it's unnecessary to fabricate things, one can simply curate which facts will be presented and how.
It might actually be. If you identify the left as more optimistic about human schemes and efforts, and the (traditional, conservative) right as more pessimistic, and also account for the fact that the center probably trends at least somewhat more optimistic than reality supports, which seems like plain old human nature (cf. the oft-told tale that the majority of us consider ourselves above-average drivers, most retirees overestimate how long they'll live and be in good health, business plans rarely come in under budget and ahead of schedule, et cetera) -- then any steady stream of factual reporting on the results of human efforts and plans will be a little surprisingly disappointing to the center, and only match expectations of those a bit more pessimistic than the center.
I'm not sure that I trust anyone to be 100% accurate 100% of the time. In general I rate what people tell me based on a number of criteria:
- does this person have an incentive to say this, regardless of its truth -> downgrade
- does this person show their work, cite sources, etc. -> upgrade
- does this person show evidence of training in the field, including the ability to communicate in an appropriate register -> upgrade
- is this information plausible, based on other information
- can I test it? what happens if I do?
- does it generate testable predictions? Are people claiming to have tested them? what results do they report?
- does it matter? If you tell me you had spaghetti for lunch, I'm not going to spend time thinking about whether you are telling the truth.
- has this person been reliable in the past? if so, was it in a relevant area? (The person who gives me the best recipes for ingredients I don't know how to cook with gets a big upgrade for recipes; almost none for details of non-culinary history)
You get the picture. Except in practice I don't bother unless there's something new involved. The replication crisis caused me to revise a lot of my prior beliefs towards greater uncertainty. So did discovering that in the 30+ years since I'd been in college, biology had advanced in ways that threw much of what I'd been taught into question. So had paleontology. I treated both as pretty close to "relearn from scratch". OTOH, I rejected several new claims about history, feeling that elderly eye witnesses I'd known in my childhood were more reliable than seemingly politically motivated claims from people too young to have ever met any eyewitnesses.
Greenwald, himself, broke the Edward Snowden story, and also the reporting on the Hunter Biden laptop. In both cases, he was stiffarmed by traditional media. His whistleblower bona-fides are pretty solid, to my thinking.
Seymour Hersh broke the My Lai massacre and the bombing of Cambodia, but he's spent the past decade or so peddling nonsense. It is a failure of logic to assume that because a person was conspicuously honest and diligent once, they will always be honest and diligent.
With investigative journalists, there is a failure mode where their early high-profile stories put them in a place where they will always and only have an audience for stories catering to one side of the political divide, because one side sees them as a hero and the other as a traitor or a fraud. Too often - not always, but often - they fall to the temptation to turn whatever bits of fact come their way into the stories their audience wants, without due diligence. Because their audience will believe them anyway, and the other side never will no matter how diligent they are. Greenwald hasn't gone as far down that path as Hersh, but there are enough warning signs that I do not extend him a high level of trust.
This comment and your original one sound to me like you're trying to stick to too much of a black and white model of trust. Everyone is wrong all the time, even ourselves. So I think a trust/no trust model doesn't work very well. Like Scott has written about, when Fox News or The New York Times (pick whichever one you trust the least) writes that there was an earthquake, you can pretty much trust that there was an earthquake. After that, it gets harder. Not only does a binary model not work well, it often ends up leading to a liars/truth tellers, in group/out group dynamic. People you know who are wrong about things, even denying evidence to the contrary, are probably not liars as such, they are just misinformed and sticking to their guns. Everyone is prone to it.
The Hunter Biden story was repeated ad nauseum all over conservative media. Fox didn't shut up about it for a year. In what sense was he stiffarmed? Even mainstream center left papers like the NYT were reporting on it from fairly early on, in the context of "this is something that is being alleged"
The mainstream was reporting that 50 CIA agents signed a letter stating the Hunter Biden's laptop story bears all the earmarks of a Russian misinformation campaign.
I also need to mention that Greenwald was "stiffarmed" by The Intercept, the newspaper/publisher he FOUNDED, who refused to publish the Hunter Biden story at the time.
I know this seems like I am GG himself, but the facts here support his credentials as a journalist reporting truth that corporate media have decided to suppress.
IIRC, even Fox News - who a few months later would be pumping up various conspiracy theories about the 2020 election that their own anchors were privately saying were totally bonkers - declined to break the Hunter Biden story. It simply wasn't well corroborated.
It's not like that sort of thing is unprecedented. The infamous "Steele dossier" wasn't reported on by any major outlet until after the 2016 election, and when it was, a bunch of media outlets came out and said some version of "we were shopped this story but declined to run it because we couldn't confirm its reliability". It was Buzzfeed that actually published its contents.
Were the Snowden leaks "well corroborated"? No seriously major political scandal I can remember was ever well corroborated from the outset. Publishing the story is what opens the floodgates to further leaks and corroboration.
We disagree. The Hunter Biden story was suppressed, censored, memory-holed, deliberately hidden, exactly at the time of the US Presidential election. Any subsequent reporting on that story happened after the crucial time of Operational Effect. All US media conspired to dismiss and hide the story, and to actively discredit and slander any source that reported it.
This is exactly what I am talking about re: Authority, crisis therof
Given that the train derailment was not actually dangerous, and anybody who believes that, learned their chemistry from Spider-Man, or Jane Fonda, yes, the headline was disengenuous. I am not saying Greenwald is an Avatar of Truth, I guess I am saying I don't immediately assume he is lying.
I haven't looked into the chemistry of what was on that train.
What I do think, though, is that if the response to a train derailment in Ohio needs the intervention of the President of the United States then the USA needs to be rethinking its approach to federalism. The US has eighteen million government employees, it should be capable of dealing with stuff like this without the need for the old dude at the fancy desk.
It's funny because I don't trust Glenn Greenwald to tell the truth at all, he strikes me as very dishonest. If you ask me for examples I can't give you a definitive list or anything because I don't keep track, nor do I regularly follow him. Oftentimes the dishonesty strikes me as "technically true but highly misleading", other times outright false, and sometimes (I'm not sure if this counts as dishonesty or not) he's just really inconsistent on certain questions where he acts like he has really solid views (I know that's very vague but not sure how else to describe it).
I don't know anything about Campbell, Scott I trust to be honest but not necessarily to be correct, and the Washington U whistleblower article I read and don't dismiss as right-wing whatever.
At the 12:47 mark he plays a clip from an interview of Yoel Roth, the former head of trust and safety at Twitter. His commentary on the clip starts at 14:24. He claims, with no evidence, that the threats made against Roth were merely criticisms of Roth's work at Twitter. He claims that Roth views himself as a victim of “people, the sort of unwashed masses, the ones who have no power or position relative to [Roth],” even though Roth doesn't describe himself as a victim and the only people he mentions are Kelly Ann Conway and Donald Trump.
Greenwald says that, “I think probably the intonations of how [Roth] speaks in that clip are one of the most important things.” He introduces the clip by saying “at the risk of nauseating the audience....”
Greenwald is really an enigma to me, but whatever is going on with him, he's not acting like a trustworthy journalist.
Thanks for calling out this clip and describing it in an interesting-enough way that I actually tapped it and jumped to 13+ minutes. It's odd, though -- I'd heard of Yoel Roth whenever the twitter stuff was happening, but hadn't bothered to watch any of it. Even the 20 seconds of the clip I could stand filled me with such a revulsion for this cringing, supercilicious creature that although I still don't know who Gleen Greenwald is I now like him twice as much as I did five minutes ago. I'm a bit shocked, insofar as generally I regard journalists as the very lowest category of human, and siding with one makes me uncomfortable.
> Oftentimes the dishonesty strikes me as "technically true but highly misleading", other times outright false
So like pretty much every other journalist then? The only questions then are 1) how often their take is wrong or how misleading, and 2) is their perspective one that is being suppressed or isn't being discussed in the mainstream? If so, then that by itself has value.
Maybe someone in the comment section who will fess up to liking Nassim Nicholas Taleb can chime in on why he calls Greenwald tremendously dishonest (like Snowden).
I don't know what Taleb's beef with Greenwald is, but he has a history of first praising people to high heaven and then, when they fail to live up to his somewhat arbitrary expectations, condemning them as worthless scoundrels. Something like that may be in play here.
Greenwald is in the category of internet contrarian who treats uncritically repeating the positions of people like Putin as being a demonstration of independence because its in opposition to the American mainstream. A stopped clock is sometimes right, but reversed stupidity is not intelligence
Calling him a contrarian implies that his opinions don't derive from a consistent set of values. That's not been my experience when I've seen his stories. He seems staunchly anti-imperialist, anti-war, and anti-authoritarian. The US is all pro-all of those things, so of course he would find himself regularly opposing US policies.
I suspect any pro-Putin sentiment you're inferring is you swallowing the mainstream false dichotomy that disputing US actions is necessarily indication of support for Putin.
Interesting. Greenwald seems honest to me, at least I am willing to view him as having no overriding ideological bias or being the tool of a govenment.
Of course he may be mistaken, or too small to appreciate the larger game that is being conducted around him. If applying David Foster Wallace's criteria of the "intellectual appeal" vs "emotional appeal" vs "technical (ie expert) appeal", he is at least capable of not insulting or blatantly lying to me, a combination of 1 and 3 that achieves the purposes of (2)
Greenwald suffers, IMHO, from being rather alone in his journalistic quest (which makes me trust him rather more, actually, supra 2), and I have to be blunt, where some have a "face for radio", GG has a "Voice for print" -- his on-air persona is unfortunately whiny and not-compelling, to say it mildly.
Yet I trust him. But trust is an incredibly low bar now, as I hope I have intimated above.
To inject a bit of levity, Linda Cohn, longtime ESPN reporter who was in line to become the first female anchor on "SportsCenter" -- at the time a huge and visible mega-media position -- was ordered by ESPN brass to take six months' of speech therapy because .... well, her intense Long Island accent simply wouldn't fly. She agreed because, well, yeah, Lin-DER! :-)
I think he does have an ideological bias (almost everyone does), it's just hard to describe exactly because it doesn't fit in exactly with either the left or the right, and frankly because I think he's not super ideologically consistent, more driven by hair trigger opposition to the people he doesn't like (these days usually liberals).
TBH I don't even give him this much credit. Main reason is all the times that he says things where he's clearly trying to be technically true while also being clearly misleading. Couple of examples come to mind (these are all from memory, I don't have the links):
- claimed he never supported the war in Iraq, called someone a liar for saying he did. When pressed (Greenwald wrote a book in which he admitted he supported the war at the time) his reasoning was that he wasn't a public figure and so never publicly advocated for the war ... which of course isn't anyone's definition of "supported". IIRC calling the person a liar was on TV, the explanation was on twitter.
- when talking about Assange he repeatedly tried to give the impression that it's liberals who were responsible for prosecuting him when it was the trump administration (the Obama administration refused to do it). Similarly trying to claim that no liberal media figures had criticized this. Again always said in some weaselly way to be technically true but misleading.
- in an effort to claim that Republicans are the real party of the working class, wrote something that tried to attribute larger stimulus checks to Republicans, when the vote broke down almost 100% along party lines
- there was some committee vote that went against trump's plan in Afghanistan that he repeatedly attributed to "Liz Cheney and pro-war Democrats" when like 80% of the relevant committee voted for it including the majority of both parties.
- called someone a coward for deleting old tweets on at least one occasion but often deletes his own old tweets (this is less a lie than just being a jerk tbh).
There are plenty of people who get out over their skis in reporting something where they believe what they're saying ... but the above are all premeditated attempts to mislead people.
> claimed he never supported the war in Iraq, called someone a liar for saying he did
I'm not familiar with the circumstances here, but "supported" does in fact have many possible interpretations, so I can easily see confusion develop around this question.
"Publicly expressed support", as in, "an endorsement" is one interpretation. This happens a lot with candidates running for office. You can even endorse someone without fully supporting their policies. From your description it sounds like Greenwald did not publicly endorse the war.
"Privately agreed with but didn't publicly express support" is another interpretation. Many voters and non-voters fall into this camp, as in they don't go around loudly shouting their policy preferences but obviously they agree/disagree with various positions. From your description it sounds like Greenwald did privately support the war.
"Materially supported" is another interpretation, which goes one step beyond just public support. For instance, are people who send "thoughts and prayers" to victims of a tragedy really supporting them?
I think there's a lot more nuance here than you're allowing.
> when talking about Assange he repeatedly tried to give the impression that it's liberals who were responsible for prosecuting him when it was the trump administration (the Obama administration refused to do it)
"Liberals" are responsible for prosecuting him *right now*, so they are the people he's trying to shame. Who started it isn't really relevant is it?
I have no particular opinion on the rest so I won't bother looking up the data to confirm and reply.
> I think there's a lot more nuance here than you're allowing.
I don't think there is, but even if there is and I'm wrong about that, he's still wrong to call people liars over it.
> "Liberals" are responsible for prosecuting him *right now*, so they are the people he's trying to shame. Who started it isn't really relevant is it?
IIRC he was blaming liberals for it even during the trump administration! Plus ... your reasoning doesn't excuse the misleading statements he has made where he tries to make it sound like liberals are entirely responsible for the situation.
I think they're trying to be, but are held back by the fact that they simply don't have equal representation of right-wingers and left-wingers on their staff.
Reuters and The Economist are the most informative and least biased I've found, though they're not perfect and you still must use your judgement when reading them.
I also find the Economist's analytical writing to be quite good. Which is not to say that I always end up agreeing with their take, at all; but they are clear and generally level-headed about it. I have to put some thought into why I disagree when I do.
Also the dry English wit that sneaks through now and then makes me LOL.
The stuff that they might as well not even publish in my experience is their annual predictive-punditry special issue and their quarterly "1843" lifestyles, I guess?, thing.
>The leaked materials show the Thomson Reuters Foundation and BBC Media Action participating in a covert information warfare campaign aimed at countering Russia. Working through a shadowy department within the UK FCO known as the Counter Disinformation & Media Development (CDMD), the media organizations operated alongside a collection of intelligence contractors in a secret entity known simply as “the Consortium.”
I most definitely don't have an opinion on the matter and could go either way, but you're saying "Reuters and the BBC were paid to do government propaganda" as if it was an established fact, which means that you're treating the Gray Zone and YouTube/Jimmy Dore as reliable. How did you determine that? Saying that a source is unreliable by relying on another source is a somewhat circular/unsolvable conundrum (I'm not saying I have an easy solution to that problem either...).
+1. I often disagree with the analyses and interpretations of The Economist but I feel pretty confident that something happened if they say it did, and that they'll cover most major international events. News agencies like Reuters, AP, AFP, have a business model predicated on reporting facts, so I tend to trust them as well.
I would also rely on triangulation between conflicting sources. If CNN and Fox agree that something happened, that's pretty solid ground IMHO.
The Economist probably is the best "mainstream" publication. Back when, they ran a cover article on "The Art of the Lie", about how crazy "alternate facts" spread around on social media and partisan news sites. One of the examples of this sort of obvious falsehood which wouldn't go away, was Breitbart, et. al. pushing the idea that Hillary Clinton had some sort of undisclosed medical issue. I mean, come on.
They published on September 10th. The next day, of course, Hillary collapsed in public, due to an undisclosed medical condition.
So that's part of the problem: the Overton window, at least for the mainstream media, isn't wide enough to encompass all of the actual reality around us. The idea that Hillary had some sort of potentially-disqualifying condition, even temporarily did, was totally unacceptable; and so they didn't accept it. (Even afterwards, the media didn't vigorously follow up the remaining questions about mis-aligned eyes, long disappearances from public view, etc. the way they dog-piled McCain, say.) But reality is nothing more nor less than that-which-we-must-accept-regardless, isn't it?
The hilarious thing about that one was the total denial on very mainstream high quality podcasts about her "collapse". Some very serious very "just the facts" people insisting there was literally zero mentionable/noticeable about that video. To the extent it made you wonder if they had watched it. But they clearly had.
Was yet another eye opening moment, and I had thought my eyes were already super wide.
Since I don’t think I’ve self promoted in a bit and literally just answered a quest on my recollection about self promotion I have a whole substack about this, Extelligence.
The top post is the relevant part. Basically, Bird Watch for the entire internet.
Scott recently reviewed Malleus Maleficarum. If you enjoyed the review you might like a podcast I did very recently with Stacy Schiff on the Salem trials. I guess what I particularly took away from what she said was how important due process is. Well that and not relying on spectral evidence that nobody in the court can see except the 'bewitched'. In fact if I were on a jury today I am not sure I'd convict anyone - even confession evidence seems hopeless. Lots of the accused confessed - maybe to save themselves and maybe even because they began to think they might be guilty. In Stacy's book she has a reference to a two part article in the New Yorker called Remembering Satan. This is a recent case and is absolutely terrifying. Here are links to the article and the podcast.
If people are going to post questions like "is it OK to post a Wordpress blog" here rather than on the linked post, the experiment was clearly a failure.
Was curious after the Geography of Madness review: does the social contagion version of a disease (eg delusional parasitosis) improve if the patient changes social settings (eg moves out of the house where one roommate is spreading the delusion)? Or once your prior is trapped, does the original social cause not matter anymore? Any research on this?
It’s interesting seeing AI hit the mainstream, and I’m curious if the anticipation of AGI is going to begin affecting career planning, retirement saving, etc in a way that actually moves markets
I’m trying to figure out where to invest for the compute infrastructure. I think the chips are going to be a bottleneck but haven’t done any math on it yet.
This will be really interesting to see. Have been considering changing my pension contribution down a little bit. I worry greatly about the threat of AGI but clearly not enough to change my own spending habits too greatly. At what point do we start yoloing in a panic
Cycling websites propose cities institute "green waves" - https://www.centreforpublicimpact.org/case-study/green-waves-bicycles-copenhagen - patterns of traffic lights such that cyclists traveling at a fixed speed always hit all green lights (they might coincidentally get to the first light when it's red, but then they'll wait until it's green, and then after that they'll be "in sync" and always hit all green lights).
Why can't cities do this with cars traveling the speed limit?
This was once common in small towns in the midwest US. The speed for which the lights were synchronized was posted on a sign as you entered town. It may still be common practice, I haven't driven through the midwest in many years.
> When multiple traffic signals are close together on a highway, they are normally interconnected and are often operated as a coordinated system. Coordinated Signal Systems are probably the most misunderstood signal system networks. A coordinated signal system network does not mean that a driver on the main roadway will get green lights all the way through the network. But the system is set up to get drivers through the entire network as quickly as possible.
> There are times and locations where this can mean sequential green lights through the whole system, but that is not always the case. Traffic volumes, distances between traffic signals, sizes of intersections, pedestrian activity, and other factors may not allow for truly sequential green lights.
> Many coordinated signal systems are not operated in coordination all the time, but instead only during peak commute hours. This is why networked systems may operate differently depending on the time of day, which may mean more or fewer stops for users.
> Coordinated systems can also cause more headaches for side road users. Coordinated signal timing is typically based on the largest and most complex intersection in the network. This can result in longer than needed times at smaller intersections. If you find yourself waiting at a side street with little or no conflicting traffic, and it's in a coordinated system, this may be why. However, as with general cycle length, even coordinated systems are set up so that no system in the network has a cycle length greater than 240 seconds, and are as short as possible.
Sarasota, Florida did that for a major traffic artery. Didn't realize it was unusual. Could it be that lots of places do this already and you don't know about it? You can't do it for every street, of course, because adjusting the lights of road A so that someone going the speed limit gets all greens is not necessarily compatible with people on roads intersecting road A also getting all greens. Though I'm sure a computer could set up the lights to maximize the number of driver minutes people spend on all-green routes.
About 10 years ago, Los Angeles finished a major project to get all the lights on a central adaptive timing system so the timing can be adjusted on the fly according to demand. They claimed the average travel time during rush hour fell 20%. Of course for some people their travel got slower (or at least their perceive travel time got slower).
In DC, where i have the mosts experience commuting, the main avenues traveling into the city from the MD suburbs are mostly all timed to give green waves at just below the speed limit. The timing swaps for the afternoon rush hour.
What if the cyclists are not travelling at fixed speeds? All kinds of people cycle, from school children to older people. Not everyone is 30 years old in semi-professional kit cycling at a rate of twenty mph while timing themselves so they get a particular set of exercise in while cycling to work.
So I can see slower/faster cyclists not being in sync with the 'wave' and hitting red lights anyway, while car etc. drivers will just be pissed-off at the holdups.
In portland you get get a green wave by traveling at 12 mph which is pretty slow on a bike.
And yes faster cyclist miss the wave. The point isn't to allow everyone to travel as fast as they can, its to improve the total capacity of the traffic network. Most cars at rush hour are stuck in traffic and stop at every light anyway, this doesn't make that problem worse.
My city has hills, and traffic lights optimized for cars. Trying to hit all the greens on a bike is a terrific trial, especially on some unhill stretches.
The idea is that the lights are timed differently than they normally are so you can have a green wave on a bike. In portland this speed is 12 mph which is a very reasonable speed. The green wave benefits cities with hills because if you get on one you are unlikely to have to stop mid-hill.
I would imagine that "cycling activists" they'd have them optimised for a man in lycra who cycles every day on a $8000 bicycle because screw everyone else, they're doing cycling wrong.
Your comment and trebuchet's don't reflect the current culture around cycling and walkability activism. It is not dominated by Lycra wearing weekend warriors or people who think they can do all their errands all the time on a bike.
The vast majority of people activating for cycling in cities are looking to make their commutes safer and lower the barriers to new people cycling on their commute. The mode policy requests are usually very reasonable and have been shown to have little to no impact on car travel times.
They can. The one-ways in my city are timed so that you can drive without stopping as long as you go ~5 under the speed limit. It's a huge pain when one of the lights gets out of sync, though.
I suppose it’s hard to optimize for car and bicycle traffic. Plus Denmark has few hills so you don’t have to account for climbing and descending bikes.
Weird, I did some work on this in college and someone reach out and asked for a meeting earlier today. Glitch in the matrix I guess.
So: they can and do. Haven't you noticed while walking that you often see the next green light even when the full stop might be for a few minutes? This is because you are moving more slowly than the system is anticipating. If you were moving at full speed you'd hit a green wave. (There will never be perfect green waves since waves can interfere with each other. But certainly it can be much better than a random alternative.) Traffic management is done through a series of green waves that flow over the city down street systems.
The issue is that it's dumb, constant, and has no feedback loops. There's no way for the system to get an average speed of cars in an area or often even whether cars are at a specific point. (Let alone things like emergency vehicles which often operate based on sound still.) Google kind of gets it by tracking phones. But this is fairly inaccurate. A better system would be to measure speed at both random intervals (since people act differently around lights) and at lights. But this has a groundswell of local resistance because every jurisdiction that does it immediately sets about ticketing speeding violators. But if you had that you could then dynamically control the waves and traffic.
Since cities don't they instead figure out the roughly average speed and then index on that, meaning it's constant (or semi-constant by time). This works roughly well enough but is ill suited to large surges in capacity and the problems get worse the more agents (cars) you introduce into the system. The famous example here is that in many cities a single car going 10mph below the anticipated limit can cause thousands of people to get caught in a traffic slowdown.
This is one of the promises of autonomous driving. Smart cars can transmit their location, speed, etc which can then be given to municipalities for their lights system and the cars can know what the speed the wave needs is. Though obviously we're not there yet.
I actually met a Chinese fellow who was trying to get his local government to implement smart traffic light technologies since they already have cameras/monitoring everywhere. But they weren't interested. Of course they have reason to not be interested based on how construction and government revenues work in China.
This is what I was referencing with "done by noise." I wasn't aware of the lights but I was aware of lights that are set to change in response to sirens. With the same "you can abuse it but illegal" thing.
They can and many places do. It’s much more common to set up green wave timing for cars than for bicycles (this is the first I’ve heard of the same approach being used for bikes instead).
Cities do in fact already do this for cars, where they can. But it’s hard to get it going in both directions unless it takes close to half a light cycle to move from one intersection to the next. I think bike and pedestrian travel speeds are better for that than car speeds, at usual block sizes.
If you’ve got multiple perpendicular arterials that intersect, then it’s very hard to do it on all of them at once, even in one direction.
Isn't this already the case for cars in areas where it's possible? It's hard in cities because there are a bunch of variables to juggle like not having traffic back up in specific areas. It seems like it would work even less well for bikes because I suspect the variation in speed among cyclists is greater than that among cars. Probably only makes sense in areas where bike commuting outpaces cars.
edit: yes, "On San Francisco's Valencia Street, the signals were retimed in early 2009 to provide a green wave in both directions, possibly the first street in the world with a two-way green wave for cyclists"
Why couldn't it? Imagine that in the east-west direction, you had a 300 foot line of cars every 1000 feet, and in the north-south direction, you also had a 300 foot line of cars every 1000 feet (forming a sort of dashed-line pattern). Then you just staggered the lights so that the dashes in the two lines were slightly out of sync, and each dash was moving at full speed through the gaps in the other line.
Outside of a central city, there will generally be more traffic traveling in one direction during one part of the day than in the other direction. Signal time is usually timed with this in mind.
Additionally, most streets on which you would do this kind of timing are also likely to have a need for dedicated left turn lights which will further complicate having waves in both directions.
Hmm, what's your point? Are you suggesting all the people who have been cancelled deserved it because they were assholes? And even so, I'm not sure even an asshole deserves to be cancelled.
Thats an unnecessarily rude reply. At the risk of participating in the rude replies and taking a cheap shot, it seems you have the literacy problem in this exchange.
In your comment, Ragusea's "everyone of them was an asshole" refers to the group of people Ragusea had met who had been cancelled. In George Herold's comment "all the people who have been cancelled" refers to the whole universe of people who have been cancelled. Ragusea's set of people is a sub-set of Herold's. So there is no literacy problem in Herold's comment. Herold is asking if you are suggesting we generalize from Raguseas specific expereicence. Seems a perfectly reasonable question.
Just because journalists are backstabbing cut-throat assholes doesn't mean that social structure and power base isn't used to target other people too. Most cancellations are not aimed at journalists, and the being told the people at the top of this system rose up in this environment is not really comforting.
Yezhov was a monster who ran the NKVD, then was executed by it and replaced by Beria, another monster, who in his turn was also executed. You may say they had it coming, but between them, many thousands of innocent people were killed.
Meh, this is obviously just another "just world" cope from a person belonging to class that seems physically incapable of contemplating that mob justice is not justice and it doesn't work, it has been tried countless times and it doesn't work, even when it appears to work at the start it doesn't and sooner or later you will find out, typically when you are under the guillotine yourself, only for a poor generation of fools later to make the exact same mistake again later.
It's curious how thinly veiled the euphemism of "Anti-Social Tendencies" is, why don't you just be honest and come out and say "doesn't win popularity contests" ? I'm sure middle eastern or pakistani Atheists have plenty of "Anti-Social Tendencies" in the eyes of the wider societies they live in, so does most autists for that matter.
And it's hilarious how the conjunction between that and the kind of personality that does the cancelling works, those types are the very definition of "conflict-seeking, anti-social", so do they deserve cancelling as well ? should we follow every cancelling wave with a counter-cancelling that cancels the cancellers because they are/were anti-social ? but then we would need a counter-counter-cancelling that cancels the first counter-cancellers, when does it ever stop ?
Or, and this is likely what cancelling apologists believe, should the cancellers be a special class of people who are allowed to behave anti-socially at will in order to punish other anti-social tendencies ? but wouldn't this eventually end in very little anti-social behaviour punished, because most anti-social psychopaths would learn the lesson quickly and join the socially-approved group where they are allowed to be an anti-social asshole without repercussions ?
There was plenty to disagree with in that Adam Ragusea podcast. He really seemed to think that cancel culture is not a serious problem. He also seemed convinced that JK Rowling is a bad person, despite offering no evidence of her ever saying anything bad.
Nevertheless I enjoy his podcasts. It's good to listen to people with the 'wrong' opinions.
Are they defaming her? Are they doing so maliciously so as to harm her monetarily? Sounds like yes in both cases. That's illegal. Why is she supposed to just let that slide?
No, but the majority of readers of this blog are based in the US so would interpret "break the law" to mean US law, which is not true.
Additionally, if Rowling can only hope to win her law suits in the UK, where laws are much more favorable to the plaintiff, then that weakens her arguments/complaints in my view.
Depends. Free speech has never included the right to call people names. The speech has to have genuine content to fall under the normal definition of the phrase. That's why yelling "You're an asshole!" at someone is not protected. No content, just vitriol.
Eh, hard disagree. Free Speech is any speech that doesn't have provable physical outcomes, and JK Rowling is a big crying baby if the claims about her suing detractors and forcing them to apologize are true. If nothing else, it reinforces the power and taboo-ness of the "trAnSPhoBe" word, while laughing it off will ridicule and diminish it.
Breaking your principles to win is only allowed under extreme circumstances, and JKR is a fucking billionaire, she's not in any immediate danger of losing from people calling her names on twitter. "Has to have content" is subjective, that's just like the wokes who say "Hate Speech is not Free Speech", well what's "Hate" ? whatever they fucking define this week.
For heaven's sake, if you can't prove it will physically harm people then it's free speech.
That's fine, you're certainly allowed your idiosyncratic definitions of a term, or philosophical concept. I'm just observing that it is not consistent with a definition that is held much more broadly. You can observe that Rowling is a goopy hypocrite by your definition, and she can observe you're an extremist adhering to unworkeable principles by hers. You're certainly entitled to assert she's breaking *your* principles (or definitions of "free speech"), but I don't think the evidence supports either that she is breaking *her* principles, or that she has no (consistent) principles at all.
I'm pretty sure that "you're an asshole!" is entirely protected under the First Amendment, and under the general umbrella of free speech as commonly understood. It is not an idiosyncratic definition of the term.
Specific accusations of criminal or immoral behavior *may* be barred, depending on the context, but it's not clear that "transphobe" in this context should qualify. I suppose the British courts will decide, for Britain at least. "Asshole", is protected everywhere free speech has any meaning.
Indeed, ensuring that any lowlife commoner can call the King an asshole if they want, is sort of the point (even if we might prefer they express their position more eloquently).
I for one appreciate a rich billionaire who cannot be cancelled fighting the fight that most people literally cannot afford to fight. If they are slandering or libeling people, and get called on it so that they stop doing it, that sounds like a win?
It's one thing to have an opinion that she's a bad person, or to not buy her books. It's another thing to accuse her of something specific that comes with both reputational and monetary harms, especially if it's partly or wholly false. I'm sure you remember the very recent attempts to get people not to buy some new Harry Potter game, because it's related to her, right?
I'm not on the side of the cancellers, so I of course I wish their bitchy mouthes would stop screaming.
The issue is that what JKR is doing is not significantly doing anything meaningful outside of her own interests, cancellers are cowards and they would quickly find easier targets to bombard, targets who can't sue. So what did we (== free speech enjoyers) gain ? nothing. JKR gained, but who gives a shit about JKR ?
If JKR was smart she should just keep taking flak and using it to increase the sympathy for her. She's literally unkillable, the campaign against the game did nothing and was free marketing.
Not unless it has zero basis in fact. A "phobia" is an *unreasoning* fear. I have a "phobia" if I fear black cats, because black cats can never (seriously) injure me. But if I'm afraid of tigers, even tigers in zoos behind bars, this is not a "phobia" because tigers certainly can do me harm.
You can argue that my (reasonable) fear of tigers should not prevent me from visiting the zoo, if I have good reason to believe the cage is stout, et cetera, but that's not the same thing.
So the question comes down to a factual question: *are* any transwomen men in drag trying to access women's spaces to assault them? If the answer is no, absolutely, that has never happened -- then anyone who makes that assertion can reasonably be accused of a phobia. If it happens so incredibly rarely that we can only find 1 in 100,000 cases where it has happened, then maybe we can also reasonably call that a phobia. If it happens infrequently but not rarely, we have a problem, and using a word like "phobia" begins to seem like more a denial of the problem and an attempt at tribal othering than a fair criticism.
"this is hypocrisy" does not equal "hypocrisy is the only sin" and I don't think caricaturing someone's argument so uncharitably will go over well here
Alternative hypothesis: the ones with "strong anti-social tendencies" are easier to *remember*. So when you think about who was cancelled, your memory returns the assholes first.
Imagine that Tom is a nice guy, and Bob is an asshole. When Tom's tweet made 20 years ago is taken out of context and used to ruin his career, Tom loses the battle without much of a fight. He finds a different job with less exposure to the public. Five years later, no one remembers Tom.
When the same happens to Bob, he writes a purposefully offensive statement, then doubles down. He provides lots of interviews to anyone who cares. Ultimately, he also loses his job, because there is only so much demand for contrarians.
Five years later, when you discuss cancel culture, people say: "well, I remember Bob getting fired for his opinions, but he was an asshole who has made many enemies, so he had it coming... uhm, I do not remember anyone else being cancelled... so I guess it is not such a big problem after all".
Falsification: You could start with making a working definition of a "journalist", then choose a year, and maybe a city... and then collect everyone who in given year lived in that city and fit your definition of a journalist. Then check what happened to those people afterwards.
"They may not have been guilty of X, but they were sure guilty of something" is a frequent fallback by systems of justice that are found to be punishing the innocent.
It's so convenient, and so frequently fails on close examination, that I think the presumption has to be against it.
(And it's practically a cliché for the person making that assertion winding up the next in the tumbril.)
It's also a dodge around determining what the actual objectionable behavior is, and finding a way to deal with it openly.
Judging people as bad regardless of whether they have actually done a specific bad thing they are accused of certainly is. Searching for new reasons to justify your hate doesn't make it okay when you find a new reason.
Human behaviour can be excused because nature is amoral ?
So I can sexually harass women at a late hour of the night wearing provocative clothes to my heart's content ? it's not just or unjust, it's just mechanical consequences.
I think Lo-Fi is calming because distortion, filtering out frequencies, and addition of white noise makes the music less punchy and more sonically static. My kids say lofi sounds old or nostalgic, and they have hardly heard any 80s music. There is a youtube channel called something like "music on a record player in the other room on a rainy day." The lofi effects mimic music heard in the distance or in the other room. There is a nothing like hearing your favorite song waft through the windows from a passing car on a warm night.
My understanding is that most internet Rationalists do not listen to Japan Audio Society High Resolution Audio via closed In Ear Monitours and performant Digital Audio Player in partially optimised home residential environments. And thus they are already implicitly opting in to an essentially lossy, low-fidelity, high-distortion setting for their listening sessions. Some use sounds and noise qua anxiolytic or mild anxiogenic to be pleasant. But Kevin Simler of the Melting Asphalt web log almost certainly got it wrong re his analysis of modern human brain organs and the radically shifting landscape in the Qobuz art market for High Resolution Music.
Yeah, imperfections make things more naturally listenable. Since you mentioned noise, various types of noise (there are many) have been used as a deliberate production tool (e.g. side-chained pink noise against snare drums) in almost every 80's rock album and still now. People fall asleep to white noise etc.. Noise is an entire genre. Why is there a cult following around The Caretaker "An Empty Bliss Beyond This World"? Partly because it's an album designed to make the main musical elements as uninteresting as possible, but instead focus on degradation, hiss, and other analogue artefacts that make things sonically compelling.
The Lo-Fi thing has become a total joke though, honestly. Zoomers will binge-watch Stranger Things and think the sound of an 8-voice polyphonic synth being multi-tracked and passed through $10k worth of studio-grade effects is "low-fi" because the show is set in the 80s and back then technology was meant to suck. If this demographic ever had to listen to a cassette of some anonymous blues guitarist they would probably writhe in pain, because it's not the fidelity per se they are referring to but the nostalgia, except it's a nostalgia for a time they never experienced in the first place.
Digital recording at the professional level emerged in the late 70s and was ubiquitous by the late 80s. At the same time analog playback was still the norm. Yes CDs existed but most people didn't have them yet.
In the days of analog recording studio time and the physical media that the music was recorded and mixed on was expensive. Mistakes by artists and engineers cost real money on a product that often was not a great money maker; most acts did no go on to be stars. Producers had to come out of pocket on the front end of a new act that was a risk. This produced an industry of skilled professionals, both behind the boards and the instruments. An alarming number of very well remembered musical acts don’t play a single note themselves on many of their early records despite the credits. Even for established acts “ghost” musicians, paid for secrecy, remained a norm as long as analog systems did. It was too expensive to let popular musicians keep making mistakes in the studio, plus studio time is reserved in advance and there is a chance you will get no usable recordings at all if you let them try. If you enjoy rock music from the 50s -60s, you’ve listened to many songs that aren’t being played by the good-looking young men on the album cover. There’s a good chance its actually the Wrecking Crew. (As an aside, country music largely avoided this convention by just being honest about it. Country promotes the front man/women, not the ‘band’, who are largely interchangeable pros who specialize in studio or tour, but seldom both.)
With digital recording, the ‘wasting money and time’ element largely assuaged. I heard an interview with Eddie Van Halen once; his career spanned this transitional period. With analog he would score out all his solos in advance, practice them a lot, go into the studio on a special day, completely sober, bang out 3-5 iterations and live with whichever was the best one. After digital recording he had a small guitar studio set up at his house and could play 100+ variations and keep the one he liked. Still, the final product was analog, so you still needed those pros behind the board to make the final product sound good. After all, someone was still coming out of pocket on this physical media before it ever sold a unit. There was still risk.
Things are different now. I have a digital recording “studio” in a spare room that cost a few grand. Its capable or producing sound quality unheard of in the analog days. I, however, am not. For me this is a big part of why older media has a pleasing “timbre” or whatever. Everything is polished as well as it could be with the tools of the era. Everything that we still like though, survivor bias and all that. I had a local DJ who’d be in his 80s now assure me there was plenty of garbage back then too that time has filtered away.
On the other hand, the modern super-clean pro-tools enhanced, quantified to a click track and mixed on an overpriced Macbook sound is (potentially) available to everyone who can scrape a few grand together and a lot of it is crap. There is no up-front financial risk to producing music apart from the gear, we mostly just waste our time. Only the very best work product from the past sticks to our collective memory, but right now anyone can have a Soundcloud.
"Only the very best work product from the past sticks to our collective memory" -- this is true of everything. Movies, literature, TV shows, whatever. I like to watch old Hollywood movies on TCM sometimes and it is striking how clumsy and poorly acted a lot of them were. For every "Casablanca" there were 10 or 50 generic potboilers that now all blur together as a wall of cliches.
Sticking to music, years ago I stumbled on a website where you could click and hear a minute each of the USA Top 20 or something hits of each calendar year. I tried it for the years when I was in high school and it quickly became....painful! Because I was being freshly reminded not of the great songs that I remembered anyway, but of all the other absolute dreck that I'd forgotten but now had to admit my generation also eagerly listened to at the time. And that second category way outnumbered the first....
Japanese aesthetics traditionally prize imperfections in some contexts, part of what is meant by the term "wabi-sabi". Interesting area to explore!
In terms of music, though, I think part of the reason those imperfections are prized is that they can add complexity of texture, plus variety, to the sound. People find complexity enjoyable. Also, certain kinds of "lofi" sounds more specific positive aesthetic qualities. For instance, downsampling can reduce high frequencies which would be perceived as harsh; it can impart a sort of "unreality" to the sound, in a similar way as cinema's low framerate; and it can give a quality of unity to disparate sounds.
The specific innovation that precipitated the present low-fi thing might be that GarageBand etc. gave the masses decent-sounding DSP (digital signal processing) for the first time and *suddenly* everyone wanted the "ugly" sounds their computers made before
Conversion (for reasons other than marriage) is strange to me these days, one doesn't think of religion as something you can shop around for but of course that's how it first proliferates. Rationalization ought to have nothing to do with the validity of a faith, according to the faiths. Belief is not (and ought not, according to the gatekeepers) be contingent either empiricism or ulterior benefits - there is but the "proof" of selected scripture, fearful calls to accept dogma lest one not find Salvation, and fervor (in a group or otherwise) being explained as spiritual experience. Of course in reality there are other reasons many people have converted and changed the course of History.
For us plebs there are only a few possibilities I've encountered: a) one believes because they "felt God / the Holy Spirit / The Truth" once exposed to a certain brand of religion, b) one believes because most of society does and it's a useful heuristic, c) one believes because it is comforting, for some reason, d) one believes because it is useful for some other reason. The explanation given is almost always 'a', with no cognitive dissonance over the fact that it's the same given with other competing religions. The defensive reasoning is that those other followers are wrong, what they experienced is not Truth, but some "quasi-Truth" (or nil, they're faking). And yet, this could not be for the believer in question.
Among the more Liberal practitioners (cultural Catholics), there is popularly a kind of watered down belief to help reconcile with multiculturalism, which goes "all religions are equally correct and allow communication with God". This absolutely, definitely does not come from the top / the Church. However the "aesthetic" / cultural type of practice does seem more agreeable and is less likely to breed hate than fundamentalism, which necessitates that everyone else is wrong and may or may not be heretics.
If the validity and Truth of a religion is evaluated by your subjective experience, and not for reasons like "x/y/z aspects of this religion appear more rational to me", can your faith really be confident, or does it functionally manifest itself more like a gamble for the afterlife?
Why does God let evil happen if he doesn't want evil and he is able to prevent evil and he is aware of all the evil that happens and keeps happening and will happen ?
Why does God create atheists if he knows they're going to end in hell ?
Why does God create a universe that can be parsimoniously and convincingly explained without him ? Why is his idea of evidence a bunch of books that can be traced back to human writers, and of which there are thousands filled with conflicting and contradicting stories, and a bunch of miracles that nobody living in a long long time ever saw, heard or otherwise reliably sensed ? Why can't God just write about fusion or programming languages in one of his ancient holy books to utterly and definitively prove that they have unearthly origin ?
Why does God need books, words and intermediaries to speak to us at all ? He can't just utter his thoughts in our brain directly ? Even human musicians can evoke meaning and emotions in me without opening their mouth.
Why does God not endow his adherents with any obvious advantages or virtues ? Why are catholics not richer or more intelligent or better educated or more honest or more beautiful or stronger on average than the adherents of any other religion ?
Why is Christianity historically recent ? What was God doing during the thousands of years before Christ and Moses and Abraham ? Do the people born in those times (and during all the times but in places not accessible to Christianity) go to hell ? heaven ? both would be unfair.
Why is the Universe ? what purpose does it serve ? if it serves no purpose, isn't he stupid to create with no purpose ? If it serves a purpose that he is reluctant to tell us, isn't he cruel for creating us with brains that love to seek purpose then dropping us in an existence with no obvious purpose ?
How can I trust that God loves me when there is no one of his own class to love ? How did he know about love ? Love requires that the object of love is something or someone seperate from the lover, but nothing is seperate from God. The entire Universe is a piece of him, his love is literally narcissism. Do I "love" my arm ?
Why is God ? Is he an accurate evaluator of his own usefulness or impact on the Universe ? that is, can we trust him to commit suicide if he finds that he is a net harm to existence on average ?
How does God know that there is no GoD above him that wants him to worship just like he wants us to worship ? How can we know ?
Even for an agnostic like myself, a lot of your questions obviously centre round the theme of free will granted to us (and other animals up to a point, and even inanimate matter in its random aspect). If there was no possibility of evil, then we would be like mechanistic puppets with no choice and thus no merit either.
Also, God (if he exists) caters not just for us but presumably for all future generations, and what seems like an evil to us, such as a volcano or earthquake that kills people today, may be a benefit to people in the future, in fertile soil for example or fresh land created by plate tectonics.
There is a verse in the Bible's Book of Genesis, after the account of creation, that seems to me revealing: "And God saw that it was good". Why would that be worth mentioning if it was a foregone conclusion? It suggests to me that the author intuitively understood (or was divinely inspired to know, if one chooses to believe it) that God is an experimenter, and endued people and matter itself with a tendency to make mistakes. He thus chose to be uncertain of the outcome of his creation, with most outcomes perhaps being unfavorable, like evolution. As the Man himself said "Many are called, few are chosen". ..
Free Will is a paradoxical notion that few (if any) understand, I certainly don't, do I have free will ? I didn't choose my body, whether I prefer to be born or not, where to be born, what language would I speak first, just to pick a few things. Those things pre-determine metric tons of things about me. Where is Free Will in those things ? If God is such a fan of Free Will why does he create people in situations where there is a single course of action (the other being Death), did God think about the Free Will of all those millions of Germans he created between 1920 and 1950 ?, the tens of millions of Russians he created during the same period ? did he think about the countless poor women who sell their bodies to eat and feed their families everywhere since time immemorial ?
The Universe sure as eggs doesn't seem so optimzied for Free Will, and the vast majority of our life is spent doing shit we would rather not, so what's a single forcing function more ? One that enables Good and not Evil and cruelty for a change ?
Also, if Free Will is so good would it be present in those select few who go to heaven ? If yes, then does that mean that those in Heaven are not, in fact, infalliable and can be tempted to commit sin and suffering while in Heaven ? Wouldn't that mean that Heaven isn't so special after all ? Or does God have a secret sauce that makes Free Will works while not making us dumb machines, in which case he can save incalculable amounts of suffering by applying it to this Universe instead to make it more like Heaven without sacrificing the precious Free Will ?
>what seems like an evil to us, such as a volcano or earthquake that kills people today, may be a benefit to people in the future, in fertile soil for example or fresh land created by plate tectonics.
This, as they say in my native tongue, is "an excuse even more uglier than the wrong committed". So God's excuse for all the endless cruelty that plagues this Universe is that it enables better things in the future ? The OMNISCIENT, OMNIPOTENT and OMNIBENEVOLENT creator, can't make good (like he made the entire universe from nothing) without inducing untold amounts of evil in the process ? Why would people worship a medicore engineer like that loser ?
There are centuries of theist philosophers who address these exact questions far more eloquently than can be addressed in an internet comment. I'm seeing your future: it's a trip to the library.
With all due respect, the philosophers you talk about are cheerleaders for dogma. They pretend-play thought, but they are really doing propaganda/sophistry, because the conclusion is fixed in raw crusty stone before you start a single gear of your brain into action.
Of course, this can still be useful. Fixing the conclusion, although done with shameful coercion and condescension, acts as a "Rules of the Game". i.e. "Let us all **pretend** that God just unquestionably exists, let us make the punishment for suggesting otherwise Death or Worse, just to strengthen the pretense of course, what are the answers to <questions> given all of this ?", it might be a good and useful game, depending on the exact nature of <questions> and how they are phrased, and tons of smart people participated in this game, I'm sure.
But make no mistake, this is not how thought should work, and any intellectual work or play done under those cosntraints is inherently suspect and lowly.
>you see that people who regularly participate in religious services generally report better life outcomes in terms of health, happiness, and so on.
This seems true for any traditional/religious community, not just Christianity in particular. Which to me is by far the strongest contemporary non-metaphysical argument for religion vs. atheism. Atheism may be better justified epistemologically, but until atheists are able to use their enlightened rationality to actually implement a better society, their smug sense of superiority is undeserved.
>This seems true for any traditional/religious community,
My exact question was :
>>>Why does God not endow ****his adherents**** with any obvious advantages or virtues ?
Assuming God is a fan of logic, Religions can't be all true, a single religion being true will invalidate almost all or the vast majority of others, by pure deduction. So "his adherents" must be the adherents of a single religion or a very small group of related ones, and given that *all* followers of any religion reap the benefits you speak of, hence my question.
Religions and traditions are good to people because this is how people evolved, almost everything monkeys do together make monkeys happy. Singing together, eating together, just watching paint dry together make us happy.
And this atheist, for one, does love non-harmful traditions, and does think that people need something bigger than themselves, to worship and live for and defend. Start with your family : I personally think men should worship their mothers and wives, and women should worship their fathers and husbands. Worship something you can see, whose effects on you are measurably Good, whose obvious flaws are not something to deny and defend hysterically against all reason, but to acknowledge and embrace and tolerate. Next you can have a go at more abstract vaues : Love, Mercy, Justice, Beauty. Though we obviously still need personification : the godess of Mercy can be a nursing mother, and that of Beauty can be a small girl.
Where Christians lose me is when they make the bizarre leap from this to "A God whose evidence of existence is a 2000 year book written in an extinct language by known human authors, and a mother whose claims of virginity violates every single thing we know about how mammalian reproduction works", why ? just why ? there are literally millions or billions or infinite ways to worship, why this particular arrangement of extremly arbitrary collection of claims and practices ? I will tell you why : because over the course of 2000 years various coalitions and authorities saw it fit and useful to brutalize people to believe it, and without them Christianity wouldn't exist. Just like the countless religions and cults that come and go on the daily since humanity's dawn.
>atheists are able to use their enlightened rationality to actually implement a better society
What a bizarre statement, when did atheists ever claim that their specific differentiating-from-other-groups goal is to make a better society ? this is **Everybody's** goal, and the religious are no good at it than any other group of people. Religions are hilariously and notoriously bad at co-existing, for example, since you can't co-exist with people you deem to be misguided and blind to the fundamental truth of the universe. This mainfests in multi-religious societies for example (Lebanon, Iraq, India), but even a 100% percent religous group that is serious about its religion often finds international relations difficult.
What "Atheists" (a ridiculous loaded term that is obviously false if interpreted naively, every atheist is really more like an agnostic who thinks most religions are bullshit) really want is for you to not surrender your life and your morality to the dead hands of mad dead men. Muhammed and Jesus were just ordinary people full of bullshit, their bullshit succeeded by brute force, and anything good they said was a truism during the times they lived in. Their morality is so far below what you can achieve by listening to your mind and heart and just waking up everyday with the intention to be good. Dead men shouldn't control your life and govern your interactions with people and the universe.
My favorite thing about atheism is that religious people are already intimate with it : they practice it with every single religion except the one their mother raised them on. Imagine this, but better : No religion gets a pass, all cruel and false bullshit gets shat on. Marvelous, isn't it ? We have to call this tradition something, maybe, maybe, "No Religion " ?
>The religious established every single first world country, as it happens.
They also established every single other country, including the horribly failed ones, so that's just hindsight bias along with cherry picking in one big fallacy smoothie. Why do you get to pick the Netherlands or Denmark to speak for Christinaity and I don't get to pick Venezuela or Mexico or Namibia ?
Also "first world countries" were not always "first world". Every single one of them have a long history full of... "fun". Their first-world-ness is a consequence of a huge complicated mess of reasons that start with "Make machines that can work 24/7 tirelessly" and ends with "profit". Also mix in colonialism and a bunch of random noise for good measure.
>what with Lenin's mausoleum still standing on the Red Square.
Fair, but again, isn't that a bit too much to ask of atheists ? Herd instincts are ingrained so utterly deep inside every human that I despair of us ever reaching enlightenment and independence without physical brain modifications. That's how lost I think we are.
Atheists identify a single, very important, way that our herd instincts can drive us wrong, and are against it. They cannot and should not be depended on to identify *every* single way we can be mislead and driven against our better knowledge. To save you from Totalitarian Religion, you need Atheism. To save you from Totalitarian Communism, you need Anarchism. Why would you expect 100% intersection between the 2 ?
I like Yudkowsky's definition of rationality including both epistemic and instrumental parts. If your beliefs are so correct, why aren't you rich (in utility)?
Is he rich? I watched him rebuff a couple soft social entreaties from Elon on twitter. I was kind of gobsmacked by it to be honest. Does he have a secret fortune or something he’s leveraging to fight AI Risk from trading or something?
Why specifically Catholicism? I believe in God —with one of those long explanations that are like explaining your dreams to people where they just are listening so that you will listen to them explain their dreams— but not a specific Catholic God. Any reason for that tradition in particular?
I had a religious experience in my mid twenties that flipped me on it but I also know that isn’t evidence or even compelling to anyone else but me so I go through apologetics when taking to others.
Is there an emotional benefit you’re getting from the Catholic Church in particular? Like I don’t even have faith anymore per se because I believe in God the way that I once believed my parents will come eventually if they leave me home alone for a few hours. I know that is probably anti compelling to some people here but it’s where I ended up.
I don’t buy into a lot of the specifics of how the belief in God has been implemented though.
If you are just hedging, Judaism doesn't necessary hold that Catholicism is better than atheism. As with all things Jewish, there's a debate but trinitarian versions of Christianity are generally considered idol worship, and one of the 7 Noahide laws applicable to gentiles is to not do idol worship. So to Jews, you being an atheist or Unitarian or Muslim would be preferable to being Catholic.
But (and I don't know anything about this part) I believe Christians have a higher penalty for not being Christian that involves eternal punishment in Hell, and we don't have anything like that for gentiles who don't fulfill the Noahide laws, so maybe from that sense your Pascalian wager makes sense.
The delta between "Catholic Church" and "Go and baptize people and tell them to live as I said" is super super large.
Not even getting into the issue that the passages saying Jesus said that were written after it had already started, by the people who were benefiting from the hierarchy of such a system.
You would expect if a church was his main goal more of his life would have been about that...
I mean whatever, it is hard to take the story seriously at all as traditionally presented.
So you believe that there was a church founded in the text and just don’t think it was the Catholic Church? This is different from your original statement.
Well, I didn't see a previous version of this, so presumably you should have a good answer to: what made you change your mind? And was this None your usual Rationalist atheism or some softer agnosticism?
How do you deal with the tension of the idea that we were made for a certain end by a mostly stochastic/amoral/etc process (i.e., evolution). (Not an attack, but a genuine question I'd love a coherent answer to)
I guess my question was not super clear. Evolution is an aimless process that selects genomes from random walks in genome-space. It is perfectly consistent to posit that God put natural processes in place to make it so that this process would lead to humans evolving for a particular higher purpose. The tension arises when you consider that humans are largely an accidental by-product of this process, and the same process could have (and may have, in different planets/eras) led to the rise of vastly different entities to ourselves. Given all the selection pressures in place, if we'd turned out to be baby-eating aliens, would we not be equally inclined to interpret our characteristics as meant for some higher end? But surely not all products of evolution can be meant for some higher end - some must necessarily be evolutionary dead ends, if not steps in the "wrong" direction. Do you have an answer to trying to tackle this particular concern?
This was interesting last thread but now it's starting to look like spam. I'll let you answer questions this time around but please don't post it a third time.
Comments sections and such are the natural habitat of the contemporary catholic convert. There's a sense in which the arena of purely written polemic offers a home-team advantage (and has for centuries) to first the catholic and now the tradcat, e-cat, and others of the ilk. There are no other domains where this is the case, so it's natural of you to want to maximize your enjoyment here where it's easy.
OC LW/ACX Saturday (3/4/23) Space Georgism and Music as Human Aposematism
https://docs.google.com/document/d/1ZZiXyQNlYz3sfwRRmXxOM8T-hRA0ijRt2-HPe-eskzs/edit?usp=sharing
Hi Folks!
I am glad to announce the 20th of a continuing Orange County ACX/LW meetup series. Meeting this Saturday and most Saturdays.
Contact me, Michael, at michaelmichalchik@gmail.com with questions or requests.
Meetup at my house this week, 1970 Port Laurent Place, Newport Beach, 92660
Saturday,3/4/23, 2 pm
Activities (all activities are optional)
A) Two conversation starter topics this week will be. (see questions on page 2)
1)Georgism... In Space! Just and proper political economy for an interplanetary civilization
Georgism... In Space! - by Sam Harsimony
2) Music in Human Evolution | Melting Asphalt
https://meltingasphalt.com/music-in-human-evolution/
B) We will also have the card game Predictably Irrational. Feel free to bring your favorite games or distractions.
C) We usually walk and talk for about an hour after the meeting starts. There are two easy-access mini-malls nearby with hot takeout food available. Search for Gelson's or Pavilions in the zipcode 92660.
D) Share a surprise! Tell the group about something that happened that was unexpected or changed how you look at the universe.
E) Make a prediction and give a probability and end condition.
F) Contribute ideas to the group's future direction: topics, types of meetings, activities, etc.
Conversation Starter Readings:
These readings are optional, but if you do them, think about what you find interesting, surprising, useful, questionable, vexing, or exciting.
1)Georgism... In Space! Just and proper political economy for an interplanetary civilization
Georgism... In Space! - by Sam Harsimony
How did Georgism define economic land, and why is it important for tax policy in space?
Can you explain how Georgism can be applied to space resources like energy, matter, and physical space?
How might governments in space subsidize the collection of solar energy?
What are some difficulties with taxing matter in space, and how might taxes need to be adjusted to avoid distortions?
How can physical space in space be taxed, and what factors might be considered in assessing the value of land in space?
How can excludable resources like broadband spectrum and orbits be properly managed in space?
Why is it important to strike a careful balance when implementing taxation policies for space colonization?
2) Music in Human Evolution | Melting Asphalt
https://meltingasphalt.com/music-in-human-evolution/
Other explanations
https://en.wikipedia.org/wiki/Evolutionary_musicology
Have you heard this explanation of the evolution of music before?
What other explanations have you heard?
What do you think of it compared to other hypotheses?
Are they mutually exclusive?
https://www.youtube.com/watch?v=JbxE9myZrsg&ab_channel=Dazza
3D printing a frog pavilion
I've seen talk about ending wildlife suffering, a goal which seems impractical, but while we're at ot. why not look at optimizing for pleasure for wild animals?
While ending wildlife suffering sounds like Fourier wanting to turn the seas into lemonade, that was a heartwarming video. I wonder what other approaches could there be to optimize pleasure for wild animals.
I'm extremely worried about AI risk lately, especially after the twin shocks of ChatGPT and Sydney. I want to do something to help. I live near DC and know people with political connections who could potentially help me get a policy meeting with a member of Congress. Two questions:
1. If I can get such a meeting, what specific policies should I propose? My own ideal is to ban all AI research, but I know that's an impossible ask.
2. Any insights on members of Congress who are particularly good targets to try and meet with specifically? I'm talking to my connections about this too, but I'm wondering if anyone here has knowledge specific to the AI risk field, in terms of which members may be receptive or have been receptive in the past, and have the power/motivation to try to do something about it.
1. Policy is a bit of an open question. This section of the online book Better Without AI (https://betterwithout.ai/pragmatic-AI-safety) may go into that, I haven't read that far yet. He has argued that AI mainly runs on hype, so if public perception turns negative, the whole thing would shut down. So policy may not be as effective as advertising.
2. Representative Ted Lieu wants to regulate AI (https://www.nbcnews.com/politics/congress/ted-lieu-artificial-intelligence-bill-congress-chatgpt-rcna67752), so maybe he has a clearer picture on who to reach out to to throw a wrench in the gears.
Fucking hell, I love that G. E. Moore was dumb enough to argue that holding up his hands in front of him and saying 'Here is one hand, here is another' was enough to prove the existence of an external reality, Jesus, I am going to murder every philosophical position I disagree with with that one.
You clearly didn't understand him if you think that was the sum total of his argument.
I later saw that there's some more to it, but the first link I saw said it was simply what I said. Actually, now that I saw the logical formulation, it really isn't better at all.
The trouble with appealing to common sense is that that is the death of philosophy. In particular, you really cannot appeal to common sense while trying to convince me I don't exist, because come on dude.
> In particular, you really cannot appeal to common sense while trying to convince me I don't exist
Per my other reply to you, I'm not trying to convince you that you don't exist, but merely that you don't have the existence you thought you did. This should not be surprising at all. Most cells in your body are replaced every 10 years. Are you the same person you were 10 years ago? What are the specific properties you ascribe to "existence" and "you"? This is not an obvious question, so why do you think you already have all of the answers?
As for Moore's proof, it's a simple argument in the end. Every argument of skepticism about the external world depends on concepts, evidence and even logic that we derived from the assumption that the external world exists, so if you then turn that around to question the existence of an external world, then you undermine all of the concepts, evidence and logic that form the core of your argument, so any such argument is self-defeating and necessarily less plausible than the external world just existing.
The existence I thought I did? I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Am I the same person from 10 years ago? No, but it is the same sentience. The witness has always been there.
About skepticism, it's at least partially derived from observation, but observing the world does not at all mean that the world is what it seems. I do think concepts, evidence and logic are all flawed and are just limiting viewpoints, useful to apprehend certain aspects of the truth but unable to contain all of it.
Likewise to eliminativism, first you had to read some philosophy to reach that conclusion, because you did not always believe you lacked consciousness. Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
> I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Nobody is claiming that you don't have some existence.
> Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
Nah, the information processes that constitute "me" threw out the fiction of consciousness after processing the information and being convinced by eliminativism and science as a whole.
It's not the same Moorean argument because what I'm throwing out is the qualia/phenomenal experience but not the perception, and the latter is all that's really needed.
The information processes did not throw out anything, you still have qualia, it's not like your waking life has become equivalent to deep sleep.
I had an interesting reaction the other day to an ACX post: it creeped me out. Severely. This became a whole post of my own:
Truth of Ice, Truth of Fire
https://squarecircle.substack.com/p/truth-of-ice-truth-of-fire
What's the best joke you've ever heard?
I'll start: One of my personal favourites is Norm Macdonald's gag about Bill Cosby on Comedians in Cars Getting Coffee. Everything about that joke is perfect.
That’s really tough. The best? How about the best Shaggy Dog joke? I think I can put my finger on that one. Norm MacDonald on Conan:
https://m.youtube.com/watch?v=jJN9mBRX3uo
One of my favorite norm mcdonald jokes:
No one ever said "life is fair"...
except for the russian
working at the fair
Truly awesome. A friend of mine used to play poker with Norm - he said he was so funny no one noticed he was winning. Perhaps they didn't care. Poker Patreon?
Works best told with a mild russian accent
but I can't do a russian accent in real life nor in text
Another favorite of mine, but works best if you're familiar with the joke it's riffing on:
https://www.newyorker.com/magazine/2013/11/18/guy-walks-into-a-bar
The ‘life is fair’ one reminds me of Melania Trump’s ‘Be Best’ campaign. No definite article in Slovenian either.
For what it's worth, I didn't think that one was even remotely funny.
I have no idea whether it's worth having a second opinion on whether it's worth clicking on a short link.
I never saw that one. That's classic, though!
https://youtu.be/ljaP2etvDc4
Ricky Gervais and Seinfeld discussing the philosophical layers of holocaust jokes is pretty funny, too...
https://www.youtube.com/watch?v=k_3Q9X03Yeg
But here's the only joke I can remember...
A dyslexic walks into a bra...
I can tell that one, because I'm dyslexic, and I have.
The second joke is one I like.
The Last Laugh is a documentary about holocaust humor, and very good.
I still think about that "guess you had to be there" joke.
F-ing brilliant. Note how Jerry is all over the provenance -- he knows it's a gem. I suspect it's Ricky's joke, but by saying he doesn't remember where he heard it, Jerry can't ask for it... Comedians are fiercely competitive.
Considering how many jokes I've heard you'd think I'd be able to remember them better. Also depends what counts as jokes; the funniest stuff tends to be improv or machine translation games.
Not the best, but one that always stuck with me was from Mitch Hedberg. "I went to the store to buy candleholders, but they were out of candleholders. So I bought a cake."
I was reading through an old bookmark and saw that Scott was still looking to review Nixonland eventually. I read the book myself after that banger book review, but found that it's actually part of a quartet:
1. Before the Storm: Barry Goldwater and the Unmaking of the American Consensus (https://www.amazon.com/Before-Storm-Goldwater-Unmaking-Consensus-ebook/dp/B0087GZE32) - "the 60s" (until Watts?)
2. Nixonland: The Rise of a President and the Fracturing of America (https://www.amazon.com/Nixonland-Rise-President-Fracturing-America-ebook/dp/B0013TTKL2) - ~Watts until Nixon's reelection
3. The Invisible Bridge: The Fall of Nixon and the Rise of Reagan (https://www.amazon.com/Invisible-Bridge-Fall-Nixon-Reagan-ebook/dp/B00HXGD5CE) - Nixon's reelection until Ford beat Reagan for the GOP nomination
4. Reaganland: America's Right Turn 1976-1980 (https://www.amazon.com/Reaganland-Americas-Right-Turn-1976-1980-ebook/dp/B083SS4251) - Carter beats Ford for '76 until Reagan beats him in '80 (?)
I've not read Before the Storm or Reaganland yet, but Nixonland flows near seamlessly into Invisible Bridge and I imagine the other two books segway cleanly too. I'd napkin that the whole thing is close to 150 hours of audiobook.
Even though Perlstein is quite partisan (he wrote an '05 book on how the DNC could take center stage again), I'm not sure anyone else had even attempted to write a consolidated history like this of American society/politics/discourse/mood for such periods of time.
I'd grown up around adults who made mention of things like Woodstock and Watergate or the '68 DNC riot, and even someone who still couldn't believe we'd elected an actor (Reagan) as president. It was all ancient history, with the current era being written on TV as and after the towers fell. Reading another generation's political experience connected *a lot* of dots in my zoomer mind. I'd defo push these books at any zoomer interested in politics frfr (or anyone seeking to understand the boomers).
Relevant to Scott and any Unsong readers:
"[Purim Torah]: Should we be afraid of Artificial Intelligence?"
https://judaism.stackexchange.com/questions/133443/ptij-should-we-be-afraid-of-artificial-intelligence
So the aspirin might prevent heart attack thing.
I know there has been back and forth on this and it is not the wonder drug it was thought to be for a while but I've been taking a single aspirin before going out to shovel a large amount of snow for a couple years as a sort of voodoo ritual to protect myself.
I'm 70 now but have no diagnosed cardio issues, and have good cholesterol numbers, good blood sugar, healthy weight, good blood pressure and an echo cardiogram shows no buildup of bad stuff in my heart. In the summer I still ride decent mileage on my bike, though my 'century' rides are all metric now. I also sit and crank my old Schwinn on a turbo trainer in the winter until I think I'm risking dying of boredom.
But... when my dad was my age he died from his third heart attack so I figure taking an aspirin before I go out and perform what used to be a pretty reliable widow maker chore, hell, it can't hurt.
Does this make sense to anyone who actually knows about this stuff?
What you want to be careful about is the fact that aspirin has a significant anti-clotting function, and one thing you *don't* want to have happen is a brain bleed that doesn't get sealed off right away, i.e. a hemorrhagic stroke, either a big one, or even the very small kind that you don't notice but which degrades your brain over time.
I don't have any useful advice about this, this is something you want to discuss carefully with a physician, I'm just observing the tradeoffs. People dismiss the anticlotting issues with aspirin because they think "oh a GI bleed is less scary than an MI" but they forget about the stroke possibility.
I haven't looked at the studies. But this...
https://www.hopkinsmedicine.org/health/wellness-and-prevention/is-taking-aspirin-good-for-your-heart
No idea, but the placebo effect is real.
The Biden Administration is obviously going to lose in the Supreme Court regarding the student-loan cancellation. And on the merits they should lose, though this Court would rule against them regardless.
Both the president and his staff know that's what's going to happen, and privately don't actually mind it. The whole thing is kabuki theater.
I have been getting a lot of usage out of the saying "if you're not going to do something, you might as well not do something big" recently. Everybody in politics seems to be following that mantra these days.
Heh!
Having done a bit more reading on the case I would adjust my prediction somewhat. The standing of the two set of plaintiffs, one in particular, appears a good deal weaker than I was aware of. I'd still wager in favor of the Court ruling against the administration. But there does seem to be some fair chance of 5 justices agreeing that the plaintiffs' standing to sue over this topic just doesn't pass a smell test, leading to a procedural ruling rather than a merits one.
https://www.washingtonpost.com/wellness/2023/02/27/acl-injuries-brain-neural-connections/
Training people to pay attention to their knees during rehab might be the wrong thing. Good knee mechanics need to not take cognitive resources.
I only read about half of that article, but I think that may be the wrong take. People with bad mechanics rely on conscious thought, possibly in the same way that chess masters instinctively know the good move while amateurs have think quite hard. But chess masters go through a long stage of thinking hard about every move before reaching that level, and "don't think too hard" would be terrible advice for someone looking to improve.
Similarly, focusing on your muscular form etc is often an important part of building good movement habits. The end goal should be to make it unconscious but that's not necessarily the way to get there.
Thoughs? Did I stop reading too early?
As I understand it, the problem was that people were being told to pay attention to their knees during rehab. Perhaps the issue is that they weren't gotten to the level of unconscious competence.
https://news.uci.edu/2023/02/27/early-life-stress-can-disrupt-maturation-of-brains-reward-circuits-promoting-disorders/
This is a mouse study.
"“We know that early-life stress impacts the brain, but until now, we didn’t know how,” Baram said. “Our team focused on identifying potentially stress-sensitive brain pathways. We discovered a new pathway within the reward circuit that expresses a molecule called corticotropin-releasing hormone that controls our responses to stress. We found that adverse experiences cause this brain pathway to be overactive.”
“These changes to the pathway disrupt reward behaviors, reducing pleasure and motivation for fun, food and sex cues in mice,” she said. “In humans, such behavioral changes, called ‘anhedonia,’ are associated with emotional disorders. Importantly, we discovered that when we silence this pathway using modern technology, we restore the brain’s normal reward behaviors.”
So there's this cluster of positions called "Illusionism" which is about doubting the existence of consciousness to various degrees, whatever that means exactly. I'm very interested in understanding how people think about this in more detail, so if anyone here is sympathetic to that set of ideas, I'd like to hear from you! Like, what is consciousness, what exactly does your position say about it, and why do you think that? (And if it's applicable, what do you see as the objections to that position, and why are they unconvincing?)
If it's relevant, I'm mostly through Dennett's "consciousness explained", and I think I understand his model quite well.
I'm very sympathetic to illusionism but I think it ultimately must fail. I go into some detail here if you're interested: https://www.reddit.com/r/naturalism/comments/zr6udy/a_challenge_to_illusionism_as_a_theory_of/
I'm a borderline eliminativist along the lines of Dennett. I think Graziano's framework for thinking about this on a neuroscience level is compelling:
https://www.pnas.org/doi/10.1073/pnas.2116933119
Eliminativism is specifically about the ineffability of mental states, the "what it is like". The illusion of consciousness might be like the "illusion" of solidity. We know solid objects are mostly empty space, but this doesn't eliminate the category of "solid", it just reframes what "solid" means. Somewhat analogously, we know that ineffable qualia are incompatible with a scientific picture of reality, and rejecting the naive first-hand "ineffability" as deceptive will permit us to reframe and properly understand what's going on with "consciousness".
I think this "ineffability" is the same sort of woo that so troubled philosophers a century ago when they were trying to explain how inanimate matter could lead to life, and so they invented vitalism. Vitalism is nowhere to be seen now because progressive improvements in our understanding of the mechanisms of life shrunk the explanatory gap to the point where it seemed implausible that there would be anything left to explain once that process neared completion. I think the same pattern will repeat with consciousness.
I think the objections to this are well-known: p-zombies, Mary's room, etc. P-zombies just aren't conceivable. A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. We lose nothing meaningful by just accepting we are those automatons. I don't find this particularly objectionable because I'm also a Compatibilist about free will.
For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics. Anytime you bring in some kind of infinity you start getting unintuitive results, and humans are notoriously bad at intuiting correct answers in such contexts. I think this is just another example.
Even if this were overcome in some revised Mary's room, I think there are a lot of reasons why Mary could still be surprised upon seeing red the first time (like the ability response), which are compatible with physicalism. There just isn't anything convincing there.
What are the physicalism compatible reasons Mary could still be surprised by red?
I've referenced the ability hypothesis elsewhere in this thread, so that's one example:
https://en.m.wikipedia.org/wiki/Knowledge_argument#Ability_hypothesis
"For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics."
I think that's a misleading analogy., The axioms of Peano arithemtic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
> The axioms of Peano arithemtic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
Indeed, and yet almost every mathematician in the world was shocked when Godel effectively proved that Peano arithmetic was necessarily incomplete, which proves my point that humans are generally not great at intuiting results when infinities are involved, even when they're experts.
But you haven't shown that infinite are involved in a relevant way.
Nor has anyone shown that Mary's room is self consistent and so the infinity doesn't matter, so we're once again at the place where an intuition pump purporting to prove the existence of qualia does nothing of the kind.
How about saying what the inconsistency is, if you think there is one?
"A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. "
A zombie world would only look the same objectively. It would feel different subjectively to everyone in it, ie. it would feel like nothing. Ignoring the subjective evidence is amounts to ignoring consciousness, and is itself question begging....The argument is then: there is no consciousness, so we are all zombies alreday, so there is no consciousness.
Yeh. I can't be sure that you aren’t a zombie but I’m not.
Funny, that's exactly what a zombie would say.
No the zombie wouldn’t be conscious. If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
There’s no science here - I think you are more “philosopher” than scientist anyway - basically all you are saying, in an verbal excrescence is that we can’t prove qualia therefore they don’t exist.
> If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
That's the whole definition of a p-zombie. They are physically indistinguishable from so-called "conscious beings", talk, walk and speak exactly the same things as conscious beings, but instead of being conscious and talking about consciousness, they're just completely wrong about their own mental states while asserting they have consciousness.
> Ignoring the subjective evidence is amounts to ignoring consciousness, and is itself question begging.
Only if you count subjective perceptions as reliable evidence. They are demonstrably not.
> The argument is then: there is no consciousness, so we are all zombies alreday, so there is no consciousness.
Not really, it's more like: why posit more entities than are necessary to explain the data? It's more parsimonious if p-zombie world were just our world and we're just mistaken about consciousness, and perceptions are commonly mistaken, so I should epistemically prefer "no p-zombies".
Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
After all, you can't actually prove that consciousness exists without simply referencing your perception of internal conscious experience. All of the logical arguments purporting to demonstrate its existence are fatally flawed. Unless you can prove it, there is no reason to posit its existence. That said, I agree a mechanistic theory should explain why we *believe* we have consciousness, and I think Graziano's paper which I referenced above is a great start on that.
So I've been reading this back and forth (thanks for having it!) and I feel like I understand the models of both sides and why you seem to be talking past each other, but mb I'm completely wrong! But here's an attempt to explain them. If anyone wants to give feedback on this, I'd be very interested.
I think the implicit disagreement that basically generates the arguments on both sides is what you take as the epistemic starting point. (I'm gonna represent both sides by Alice and Bob here so that if I mispresent someone it's less bad, also one side was argued by multiple people.) So Alice would argue that the way you reason about the world -- not about consciousness specifically, but about everything -- is that you have patterns of qualia and then react to them. That's how you navigate the world; you see qualia of something, like the image of a car in your visual field, and you react to it somehow. So the epistemic chain is something like [qualia] -> [interpretations of qualia based on experience/knowledge/intelligence/whatever] -> [conclusions about the world]. This qualia could be a non-material thing that acts on the body, or another aspect of the material stuff in the body; the distinction actually doesn't matter here.
So consequently, the qualia takes epistemic primacy. That's your *starting point*. Which means any theory has to explain qualia first and foremost. And it's a-priori impossible for science to rule out the existence of qualia because everything science does is *itself* based on qualia. If you're working in a lab, you don't start from the measurement itself; your starting point when you look at the display of an instrument is the *qualia* of the result that appears in your visual field, and then you conclude stuff based on that.
In particular, the argument from the paper,
> (1) for everything we know, there must be information in the brain
> (2) the brain's models are always fallible
> (3) therefore, our "knowledge" that we have qualia is fallible
completely misses the point, as does the illusionist framing in general. The reason is that all of these arguments only show that you can be misled about *what an input means*; that's what an illusion is. None of them show that the *input itself* is non-existent, which is the only thing that matters! Alice isn't claiming that her qualia are evidence for any particular thing in the external world -- such a claim could indeed be wrong! -- she's only talking about the qualia itself, and the argument above doesn't show how that could be nonexistent, and neither does any other argument made here. Again, it doesn't even make sense to her because *everything* you find out about the external world is itself based on qualia; it all starts at the same point.
Meanwhile, Bob identifies himself not with qualia but with his entire body as an information processing system. The qualia thing isn't the epistemic starting point; it's an *output* of the information processing system that is Bob (or anyone else). So conversely, the argument "you need to explain why there seems to be experience" misses the point because, well, the "seeming of experience" thingy is also just an output of the information processing system that is you. So you do have to explain why *the system produces this output*; you have to explain why Alice talks about 'kon-shush-nuhs' but you *don't* have to explain the experience thingy itself, because that's just something the information processing system talks about; it doesn't have to be true.
(This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status. The part that's confusing to Alice here is that you even model *yourself* from this third-person perspective, sort of.)
So as long as Alice waves her hands and stubbornly repeats that *no you really have to explain this experience thing, it's really there*, all that misses the point for Bob because it all assumes that the qualia is the epistemic starting point, which it isn't; again it's just an output. The only evidence that *would* count is, basically, anything that's valid from a third person perspective. So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence. Or if we argue about the a priori *complexity* of a universe with qualia in it, that could be relevant for the prior we can assign to both hypotheses. Or if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive. But appeals to the epistemic primacy of qualia aren't.
Does this sound right? I feel like if it is, then neither side has really provided evidence that's compelling the other side -- understandably so!
I don't think the qualiaphilic side need to lean very far towards the primacy of qualia, so long as consciousness is not ignored. In a way, what is epistemically primary is some notion of experience or perception including , but not limited to, qualia.
"if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive"
But it's still not invalid to say that qualia exist without having novel causal properties , so long as the are identical to something else...if qualia are not.an entirely additional ontological posti, they they do not have to justify their existence with novel causal powers.
Yeah, I took qualia as a standin for "subjective experience of any kind"
> But it's still not invalid to say that qualia exist without having novel causal properties , so long as the are identical to something else...if qualia are not.an entirely additional ontological posti, they they do not have to justify their existence with novel causal powers.
Not novel causal powers, but perhaps causal powers period -- even if they're also explainable in material terms?
I think you have the general contours of the situation. Some people take the primacy of qualia as a given, that it's a new kind of knowledge that simply cannot be be questioned because it can be directly experienced.
This seems to inherently beg the question to me. Scientific study always starts with "working definitions" that serve as a good starting point but require refinement as we develop a more complete, coherent picture of what's going on. We started with the premise that qualia exist because we had no reason to question their existence.
So we did our science and ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*. This state of affairs *requires revision to our assumptions and basic definitions*, as it would in any other scientific study, and we can devise alternative explanations, like eliminative materialism, that resolve this problem. But because Alice takes qualia as axiomatically true this is not a solution but an absurd kind of heresy, and rather than question the primacy of qualia, she would prefer to question various assumptions like reductionism or materialism.
There is no logical argument or evidence definitively demonstrating that qualia must exist or must have primacy, and the only argument from my side is epistemic parsimony and a recognition that nothing has been as effective as science at explaining the natural world of which we are a part.
Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
> So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence.
Yes, and we do have evidence that qualia are not integral but merely components of an information processing system. Phenomena like blindsight show that humans reliably report objects in their blind spot even when they don't consciously experience it. This is clear evidence that conscious experience and qualia are simply not what we perceive them to be, that they are merely a component of a system.
Who said that qualia are not components of a system ? How do we perceive them?
> Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
I think in the Alice model, it is possible to do science without qualia, but the evidence that you get from science -- and even the evidence that you are doing science at all -- is again qualia.
Anyway, I feel like this does point to a possible way that the problem could be resolved in principle. Like, Alice and Bob could agree that they can't agree on the epistemic starting point, so they could take the scientific validity of qualia as a crux. It'd be up to Alice to explain (a) how qualia works at all, (b) how a universe with qualia is philosophically simple, (c) how various phenomena like blind sight or the moving color thing from Dennett are compatible with a theory of qualia, and (d) how qualia makes functional predictions about the brain. If she could do all that, it ought to convince Bob that qualia exists after all.
>ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*.
You haven't shown that.
There's an argument against dualist theories of qualia, based on physical closure, where Qualia would have nothing to.do, and are therefore an unnecessary posit.
There's an argument
against identity theory based on irreducibility. you haven't even mentioned it..
In line with your tendency to ignore identity theory.
So that's two arguments against two theories of qualia. They don't add up to an argument against qualia unless they are exhaustive.
"This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status"
But "qualia" as such doesn't appear in naive phenomenological reports because it is a philosophical term of art. The naive theory is that colours, etc, are properties of external objects that are perceived exactly as they are. Naive realism, as its called, is unsustainable scientifically because science requires a distinction between primary and secondary qualities. In addition, there are specific phenomena, such as blindsight and synaesthesia , where qualia are missing or unusual. Qualia aren't uniformly rejected by scientists. for all that some philosophers insist they are unscientific.
Objective, scientific data aren't a naive starting point either. Scientific objectivity is has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
> Objective, scientific data aren't a naive starting point either. Scientific objectivity is has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
This begs the question. Focusing on the quantifiable and objective is not an implicit assertion that the subjective and unquantifiable exists, it is an epistemic stratagem to focus on that which *can* be quantified *at this time*, and progressively build understanding to a point where the previously unquantifiable can then be quantified.
The opposite is true. Given that everybody has an internal conscious experience then that has to be explained. Any science that doesn’t explain it isn’t a science. In fact it’s just hand waving because we don’t understand the brain.
The belief in conscious experience has to be explained, the experience itself is a fiction. I recommend reading Graziano's paper.
> Only if you count subjective perceptions as reliable evidence. They are demonstrably not.
If subjective perceptions are not reliably evidence, aren't you knocking out all of science? There is no science if we can't observe reality.
> If subjective perceptions are not reliably evidence, aren't you knocking out all of science? There is no science if we can't observe reality.
I've responded to this point elsewhere in this thread as well, which I'll reproduce here:
> Science also does not require fully reliable perceptions or senses because it can quantify the unreliability via repeatability, and 1) restrict itself to the narrow domains in which perceptions are reliable, and 2) project measurements from unreliable or undetectable domains into the reliable domain. That's what instruments are for.
But how are you discerning which perceptions are reliable? And even if perceptions are unreliable, there is still the fact that we perceive. Reality could all be an illusion, but the illusion is being presented to someone.
> Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
sorry to jump in, I'm just curious, does this sentence imply that all [theories of consciousness under which consciousness is real] necessarily contradict science? Like, there's no way to have a theory that posits the existence of consciousness but is consistent with the laws of physics (and hence science/effective explanations/etc.)?
This would sort of mean that a second, implicit reason why you like the Dennett approach is by process of elimination; all the alternatives are bad.
Strictly speaking, no. For instance, panpsychism would not require changing any natural laws to explains things we've seen, it might simply require us to accept that every posited entity carries with it some spec of consciousness, and that natural laws will aggregate consciousness in various unobservable (subjective) ways. Human brains are then an aggregation of consciousness that can finally reflect on and understand consciousness itself.
If you consider first-person consciousness to be irreducible to physical facts, that's probably an elegant way recover science with a somewhat unverifiable component. Seems more plausible to me that we're just mistaken about our own mental states.
Re: process of elimination, in a way, yes. I go into that below in my thread with The Ancient Greek. It's just epistemically more justifiable in so many ways.
But panpsychism is clearly ridiculous. Consciousness is linked to brains. I think I could discourage panpsychists from their beliefs by asking them would they prefer to be shot in the leg or the brain.
The argument only requires that you can introspection subjective states that you can't you can't fully describe. It doesn't require that subjective states are accurate representations of anything beyond that. In part icular, non physicality is not asserted purely on the basis of introspection.
The data include the subjective data, unless you are begging the question by ignoring that.
It would also be parsimonious if only consciousness existed, and matter were an illusion. Parsimony does not imply a unique ontology.
You can't prove matter exists without referencing your own experience.
> The argument only requires that you can introspection subjective states that you can't you can't fully describe.
"Can't fully describe" is just a god of the gaps argument.
> The data include the subjective data, unless you are begging the question by ignoring that.
I'm not ignoring subjective data, I'm saying we have ample reasons to consider it unreliable, therefore we cannot derive any reliable conclusions from it until its reliability is quantified.
> It would also be parsimonious if only consciousness existed, and matter were an illusion.
I disagree. Build a formal model of consciousness and then we have a basis for comparing its parsimony to the standard model of particle physics. We have no such thing, therefore this is no different than saying "god did it". The number of logical properties we must then assign to god/consciousness dwarfs the standard model.
> You can't prove matter exists without referencing your own experience.
"Experience" implicitly smuggles in qualia. I would only agree with the phrasing, "You can't prove matter exists without referencing your own perceptions", because perceptions don't implicitly assert that conscious experience exists.
Consciousness is required to argue the non-existence of consciousness. P-zombies on their own wouldn't suddenly start arguing about the existence of consciousness and qualia without being programmed to do so by some conscious entity.
In fact, the whole enterprise of science depends on our consciousness interacting with our qualia. You might argue, as Nagarjuna did, that consciousness and qualia have no existence in and of themselves, and are instead the emergent phenomena of the interaction of underlying processes—and those processes, when examined will be seen to have arisen from deeper processes—ad infinitum. However, Nagarjuna didn't stop there. He was willing to admit that the "illusion" of mind and qualia (generated by the underlying processes) was as functionally real as the underlying processes.
And invoking parsimony doesn't move your argument along. The Law of Parsimony implies that there is a correct explanation for a phenomenon. Saying there is nothing to explain is not parsimony, it's just refusing to consider the problem.
Also, the Mary's Room experiment has been done. Not that it really helps to resolve the philosophical loose ends...
https://www.npr.org/2014/01/11/261608718/wearable-sensor-turns-color-blind-man-into-cyborg
> "Can't fully describe" is just a god of the gaps argument.
I don't see why. It's just a direct observation, not intended to explain anything else.
> I'm not ignoring subjective data, I'm saying we have ample reasons to consider it unreliable,
You haven't given any.
And remember, neither physicalism not parsimony require you to be an eliminativist about consciousness, since identity theory is a thng.
Thanks! That makes a lot of sense to me. Very in line with Dennett. Also listened to the paper you linked, which fits as well.
(fwiw I totally agree that the two objections you listed are extremely unconvincing. I even consider p-zombies an argument in the opposite direction; if your theory permits the existence of p-zombies, that's a problem.)
One thing I'm wondering, if you're willing to elaborate, is how you square this picture with morality. If qualia doesn't exist, then consciousness either doesn't exist or is just a name for a high-level process; either way there's no actual experiential component to the universe; no "what it is like". (Or do you disagree with this?) This seems to imply there's no suffering. Do you just have a moral theory that works without any conscious states? Can you have suffering without 'what it is like' type experience? Or does it imply ethical nihilism?
Regarding ethics, I just came across a pretty good article which goes into eliminative materialism and ethics:
https://longtermrisk.org/the-eliminativist-approach-to-consciousness/
> This seems to imply there's no suffering.
I don't think it implies there is no suffering, it simply reframes what suffering is, similar to my solidity example. Solidity is not the absence of empty space, it's just a different property, like the inability to pass another solid through it (roughly); analogously, eliminating ineffability doesn't entail the absence of pain or suffering, pain and suffering are simply understood to be something else, like evolved preferences that avoid damage that harms our fitness. That still sounds like enough to ground a utilitarian ethics to me.
Other ethical frameworks don't rely on preferences or values in the same way so I don't think there's a problem there, ie. deontology or virtue ethics.
Good article from the Atlantic about the "updating" of Dahl's works
https://archive.md/ABuP0
Why is this being done? If Dahl's works are so flawed, and so many of the passages need to be edited to the point of losing Dahl's characteristic nastiness and not even being recognizably Dahl any more, why not just toss the whole thing out? What's the point of keeping something so flawed?
The obvious answer is that modern corporations and woke writers are so bereft of genuine creative talent that even a dreadfully unprogressive straight white Englishman born over 100 years ago was creating categorically better art than all these modern 'enlightened' fools could ever dream of making themselves (or at least, if they don't recognize Dahl's actual greatness, they certainly acknowledge the enduring popularity his works have that their own works do not).
The edits made to Dahl's books feel to me like PR stunts that are intentionally stupid in an attempt to invoke toxoplasma of rage. I find it really hard to believe that anybody sincerely thought a book where one of the heroes famously owns a factory run by African slaves could be made to seem progressive by replacing the word "fat" with "enormous".
The hero doesn't "famously own a factory run by African slaves" though does he? Because that bit of the story had already been changed in 1972.
I'm 56 and I was unaware of the Oompaloompa origin story (until just now) even though I read the book as a child.
But you are talking as if the fat/enormous edit were going ahead whilst leaving the African slave part unchanged.
Do you think that the book shouldn't have been changed (presumably by Dahl himself) in 1972?
You're right, I had forgot about that. But even the 1972 version still has them being slaves from *some* foreign location, right? It's just left open-ended what continent they came from originally.
No it had been changed quite a bit I think. The story was that Wonka had "rescued" them from some terrible life of persecution and there was some sort of symbiosis in their working for him in the chocolate factory. But just looking at stuff online about the African version of the story (which I've not read) it also sounds like their was a bit of justification in the story of him "rescuing" them as opposed to capturing them in the way we know that many actual slaves were captured.
I wonder how many of the people protesting about the current sanitisation of the book know about the previous rewrite and whether in hindsight they would think that was a good or bad thing?
I read The Coral Island by RM Ballantyne to my kids when they were under 10 years old. It was written in 1857 and I have a very old copy that was given to my grandfather when he was in Sunday school.
It's a ripping adventure of boys stranded on a desert island and it also contains the N word in a description of marauding cannibals that come to the island.
When we came across that use of the word we were able to have a very useful discussion about it, including the idea of how language changes over time and why words that were thought to be innocuous in one place and time can be hurtful in another context.
Personally I think that simply changing an author's original text is going a bit too far, but perhaps this controversy will at least stimulate a bit of conversation between children and their parents about the importance of context in the use of any and all words.
But editing children's stories isn't a new thing and I remember a similar level of discussion (in the UK at least) when the gollywog character was edited out of the Noddy universe. I'm sure that some people who are on one side of the argument here might have been on the other side in that case.
It might also be interesting to think about why nobody seemed at all put out about terrible film versions of Dahl's stories, or wondering why it's just fine to express his ideas in a rather better musical but without the use of the offending words.
Context is everything.
If it's not a case of vanilla censorship but the parallel production of institutionally approved alternative versions of books deemed problematic, then aren't we in "Hegellian Wound" territory a la Žižek?
Eg.: First you have a natural state of things, the original Dahl ideas and writing. Something comes along and disrupts this state by imposing it's own values/agenda - the Updaters - and they inflict a wound on the original. But Hegel comes in and says: wait, this wound is not fatal, actually it is a wound that contains the vehicle for it's own healing and transcendence. See, were it not for the attempt at vulgar re-writing, the original writing would not have a context in which to demonstrate it's own inherent virtue and value to society. The wounding makes the original stronger, in ways that previously were not thought possible.
Who's the "institution" doing the approving? Is it the publisher? If so then every book Puffin Books ever published after requesting changes from the author (i.e. the normal editing process) was an "institutionally approved alternative version"
An alternative answer would be that the updaters believe that Dahl's books lie somewhere in the grey zone between "unsalvageably old-fashioned" and "better than all modern children's fiction", and that the update will help sell more books to woke parents.
Philip Pullman agrees. The books should be allowed to go out of print, and people should buy his books instead.
Philip Pullman worked very hard in His Dark Materials to make Satan seem like the good guy, and while there is something compelling in his vision, I am suspicious of someone who wants to make the Prince of Lies into a rebel hero, and God into a petty dictator.
I mean, I basically believe the YHVH of the Old Testament is insane, but Jesus still loved Him, so I don't think it's quite as straightforward as Pullman presents.
There was an article c. 25 years ago which posited that Pullman was trying to write the anti-Narnia Chronicles.
I didn't read Pullman, but my son read at least one (The Golden Compass?) and found it mediocre.
I felt the same way reading Ayn Rand many years ago.
It sounds as though Pullman spent too much time moralizing and not enough writing an entertaining story.
I've read all his books. So has my wife who's a teacher with a particular interest in children's literature. So have my kids.
We all thought it was pretty entertaining.
I could have almost been the inspiration for Augustus Gloop when first reading Charlie And The Chocolate Factory when I was 11.
And yet, I loved the story, and read the also-wonderful James And The Giant Peach shortly afterwards. It never occurred to me to be offended by either book.
I was much more aware of Dahl's edginess when reading his books to my children years and years later.
> What's the point of keeping something so flawed?
If you're the beneficiaries of the Dahl estate, the benefits are obvious.
The whole thing is probably best seen in the context of sacrifice. In this case, the spotlight has swung around to Dahl's privately expressed views about Jews, and a sacrifice was necessary to appease the powers that be. You can't change Dahl's opinions about Jews, but you _can_ change his published books, so you do that, and the spotlight moves on somewhere else for now.
It doesn't matter what the changes are, it just matters that you genuflect appropriately when you're called out.
I don't know, I started hearing about it a couple of months ago, but this could be Baader-Meinhof at work.
One theory which sounds plausible: https://www.theguardian.com/commentisfree/2023/feb/26/updating-roald-dahl-same-old-story-david-mitchell (making textual changes means a fresh copyright term to monetise)
The copyrights aren't particularly near expiration. AFAICT they don't start till around 2060. Barring changes in copyright law, the originals will go into the public domain at the same time they would have otherwise, even if the bowdlerized version remains protected.
And it will have no effect on the copyright of new adaptations into visual media, which is presumably where the real money is.
It's hard to see much of an angle there.
The announcement that the originals would be published by a separate imprint came after the outcry, and in response to it.
More, apparently owners of the books in ebook form are also seeing their copies updated, rather than retaining the books they bought or bring given a choice of keeping Dahl's work or the getting the unlabeled collaboration.
https://www.thetimes.co.uk/article/roald-dahl-collection-books-changes-text-puffin-uk-2023-rm2622vl0
The response does seem to have dissuaded Dahl's US and European publishers from following suit with the changes, at least for now.
It may be that Dahl's sales are down, but thus far no one making that claim has presented sales data (that I've seen). Dahl's alleged unpopularity seems to be belied by the fact that they remain in print and keep being adapted into films and major stage productions.
The idea that works whose draw has always been their subversive nastiness will gain sales by being made less nasty at least calls for some evidence.
"apparently owners of the books in ebook form"
Owners? I don't think that's the right word here.
Probably not.
Ebook buyers may want to look into backing up their purchases with Calibre, as a hedge against these sorts of shenanigans.
The discussion surrounding large language models (LLMs) and their relationship to AGI has been utterly horrendous. I believe LLMs and their intellectual descendants will be as transformative to society as the transistor. This technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at starting such a discussion.
To start off, I will respond to a very common dismissive criticism and show why it fails.
>It's just matrix multiplication; it's just predicting the next token
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the the low level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than our neurons tell us we're not conscious.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Just bracketing `the model parameters` as transparent and explanatorily insignificant is to miss a large part of the substance of the system.
For the sake of sparking further discussion, I offer a positive argument for the claim that LLMs "understand" to a significant degree in some contexts. Define understanding as the capacity to engage significantly with some structure in appropriate ways and in appropriate contexts. I want to argue that there are structures that LLMs engage with in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of chatGPT to construct poems that satisfy a wide range of criteria. There are no shortage of examples of such poems so I won't offer an example. The set of valid poems sit along a manifold in high dimensional space. This space is highly irregular, there is no simple function that can decide whether some point (string of text) is on the poem-manifold. It follows that points on the manifold are mostly not simple combinations of other points on the manifold. Further, the number of points on the manifold far surpass the examples of poems seen during training. Thus, when prompted to construct a poem following an arbitrary criteria, we can expect the target region of the manifold to largely be unrepresented by training data.
We want to characterize the ability of chatGPT to construct poems. We can rule out simple combinations of poems previously seen. The fact that chatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is generalizing over samples of poetry to a general concept of poetry. But still, some generalizations are better than others and neural networks have a habit of finding degenerate solutions to optimization problems. The quality and breadth of poetry given widely divergent criteria is an indication of whether the generalization is capturing our concept of poetry sufficiently well. From the many examples I have seen, I can only judge its general concept of poetry to well model the human concept (at least as far as poetry that rhymes goes).
So we can conclude that chatGPT contains some structure that well models the human concept of poetry. Further, it engages with this model in appropriate ways and appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
>It's just matrix multiplication; it's just predicting the next token
This is not a criticism. This is an explanation.
The criticism is that LLMs repeatedly produce nonsensical or logically incoherent utterances, and can be easily, reliably induced to do so. Those are commonly handwaved by "it's just growing pains, we just need to train them more", or something to that effect. What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if the Gary Marcus's one is offensive - simulators.
When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place. It's just retrieving common utterances associated with "pound of feathers", in this case, comparisons to "pound of [something heavy]", and substitutes the terms to match the query.
When an LLM says that "[person A] and [person B] couldn't have met, because [person A] was born in 1980 and [person B] died in 2017, so they were not alive at the same time", it's not failing to make a logical argument, it's mimicking a common argument. It can substitute the persons' actual birth/death dates, but it cannot tell what the argument itself, or the concepts within it, represent.
And look, the argument may be wrong, you're free to disagree, but you need to actually disagree. You're not doing that. Your entire point boils down to, people are only saying [that one line you cherry-picked from their arguments] because they fail to understand basic concepts. Honestly, read it, it does. Now, if you want the discussion to be non-horrendous, try assuming they understand them quite well and are still choosing to make the arguments they make.
>This is not a criticism. This is an explanation.
Not an explanation, but rather a description. People treat it as an explanation when it is anything but, as the OP explains.
>When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place.
Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc anymore than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans. This is an example of the kind of bad arguments I'm calling out. It's fallacious reasoning, plain and simple.
>What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if the Gary Marcus's one is offensive - simulators.
The supposed distinction between a reasoner and a simulator needs to be demonstrated. The "simulated rainstorm doesn't get me wet" style arguments don't necessarily apply in this case. If cognition is merely a kind of computation, then a computer exhibiting the right kind of computation will be engaging in cognition with no qualification.
>but you need to actually disagree. You're not doing that.
I'm disagreeing that a common pattern of argument does not demonstrate the conclusion they assert. That is a sufficient response to a fallacious argument. Now, there's much more to say on the subject, but my point in the OP was to start things off by opening the discussion in a manner that hopefully moves us past the usual sneers.
> than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans.
Yes it does. Or in that particular human anyway. Not that humans are rational and therefore rationality isn’t that important to consciousness.
It may be that some kind of consciousness is emerging here, but the burden of proof is on the true believers rather than the skeptics.
>Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc anymore than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans.
Failure of human reasoning does in fact demonstrate lack of understanding in humans.
I mean, I realize what you're actually trying to say - that an individual failure of an individual human does not disprove the potential for some humans to succeed. But that's exactly the fundamental issue with your line of argumentation - you're assuming the discussion is philosophical (and a bunch of AI specialists literally doesn't understand the concept of emergent behavior, etc.), while it's actually empirical. Nobody denies neural networks can exhibit [whatever marker of general intelligence you choose], because proof by example: human beings. The whole disagreement is about whether the actually existing LLMs do. And, further down the line, whether current direction of research is a reasonable way to get us the ones that do. (I mean, to reuse your own metaphor, you could, theoretically, discover working electronic devices by connecting transistors randomly. It does not constitute a denial of this possibility to claim that, in practice, you won't.)
>It's just matrix multiplication; it's just predicting the next token
This is as uncompelling a response as "computers are just flipping ones to zeroes or vice versa, what's the big deal?"
> The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements.
Yes, I'm not sure why this isn't obvious. There's an adage in programming, "code is data". This is as profound as the equivalence between energy and matter. LLMs and other learning models are inferring code (behaviour) from the data they're trained on. In fact, a recent paper showed that a transformer augmented with external memory is Turing complete.
So basically, learning models could learn to compute *anything computable* if exposed to the right training set. What's particularly mind boggling to me is that it's often people familiar with programming and even learning models that are overly dismissive.
Turing Completeness is a low bar, a single instruction (subtract and branch if zero) can be turing complete. Code might be data, but not all data is code. Data generated by code under-speifies it : No amount of "2 + 2 = 4; 2 + 3 = 5; 2 + 4 = 6;..." is going to teach you Add2(Int,Int), unless you're already equipped with the ability for inductive symbolic reasoning.
Look, I want to be proven wrong. Unlike most online tech discourse, I'm not an AGI pessimist. Even if AI slaughters us all to the last 1-minute-old baby, that's not much different than what we do to animals anyway. We always talk about the "Paperclip Maximizer" as if it's inherently ridiculous, as if it's self-evident that its desires are not worth the death of all humans. Then we turn around and slaughter hundreds of innocent animals per human per year to satisfy a desire that we can live without. Why is that inherently more fair than what the Paperclip Maximizer wants ? If intelligence grants you a moral blank check, as most humans seem to believe, you don't get to complain if somebody smarter comes and slaughters you for what you see as a ridiculous reason. You're not intelligent enough to judge anyway. We already have super-human intelligences that serves humans, and they are called corporations and states, and they suck. It would be a breath of fresh air to get an intelligence to replace us.
That was a somewhat dark detour, but I just want to establish that I'm not in denial for the obvious reasons, I want an intelligence greater than humans to be made, preferably without consciousness, I want the great mystery to be solved even if the immediate consequence would be "Game Over".
But LLMs are so far from that. It's so utterly trivial. How hubris-filled you must be to think that you can solve a problem like General Intelligence after just 70 years ? and with something as basic as Self-Attention ? Text is enough for intelligence ? Everything about this is screaming "not going to happen", this is just like the 1960s all over again, you solve some impressive problems (Chess, shape recognition, word problems), then you promise the world, then you plateau. A tale as old as time.
It would be interesting if I was wrong.
> No amount of "2 + 2 = 4; 2 + 3 = 5; 2 + 4 = 6;..." is going to teach you Add2(Int,Int), unless you're already equipped with the ability for inductive symbolic reasoning.
What you're saying is that no learning algorithm (e.g. gradient descent) will land in a region of parameter-space that encodes the addition algorithm. This is very likely false. This paper[1] demonstrates a modified Transformer architecture that learns an addition algorithm that generalizes to input lengths beyond what was seen during training. While the network doesn't demonstrate perfect generalization, that it generalizes to a substantial degree suggests it has discovered a general algorithm. Physical limitations in terms of circuit length will always be a factor in failure modes.
>As shown in Figure 4a, I-BERT is the only model that successfully extends the addition rule to longer input sequences, while other state-of-the-art Transformers completely fail to do so. The result is expected given the nature of the carry rule in addition. Because carry digits require at least O(log(n)) steps to be determined, it is impossible for typical Transformers with O(1) compute paths to determine them.
[1] https://msharmavikram.github.io/publications/paper/paper-ibert-arxiv.pdf
>What you're saying is that no learning algorithm (e.g. gradient descent) will land in a region of parameter-space that encodes the addition algorithm.
What I'm saying is that it's impossible or very hard to find a region that simultaneously learns perfect (till 4 or 5 or 6 digits, easily what humans can do) addition as well as being good as language. And if you did find this region then it will be even harder to additionally make the AI good at (say) Python beyond the typical "throw shit at the interpreter and see what doesn't make an error", and so on.
What I'm saying is : The Supervised Learning paradigm sucks at learning things that already have nice symbolic compressions. It's *extraordinarily* inefficient, and this holds across every single domain it has been applied too. CNNs need 10 gajilion pictures of a car to recognize a car, a 5 years old needs 10 (max). LLMs need hundreds of gigabytes of text to speak like a typical internet commenter, a typical 20 years old have only seen - I'm wild guessing here - 0.1 gigabyte or so ? barely so, and at 1/1000000 the energy or less. There is not enough data\energy\compute in the entire world to make current-paradigm AIs smarter, and right now they are fairly dumb, and the scaling is a logarithm-ish diminishing return curve instead of the exponential that Ray Kurzweil worships.
Scale matters, doing something once doesn't imply you can continue doing it indefinitely. Charles Babbage invented a Turing-Complete computer in the 1840s, and historians say it's likely that he very well could have built it with the tech of the time given enough money and patience. But Babbage would have never got to the personal computer, there are simply too many leaps between "A computer" and "Personal Computer".
LLMs are like Babbage's computing machines : Bulky, Inefficient, Solves Age-Old Problems Automatically But Needs Billionaires To Operate. They can still revolutionize and shake up the shit out of us and our socieites, I don't deny that, just like Babbage's machine almost certainly would have revolutionized the shit out of 1800s England and Europe in general. But the machine itself is meh, a fairly ingenious innovation on an evolution of Attention-Augmented RNNs, not an Earth-Shattering SkyNet.
> No amount of "2 + 2 = 4; 2 + 3 = 5; 2 + 4 = 6;..." is going to teach you Add2(Int,Int), unless you're already equipped with the ability for symbolic reasoning.
An interesting conjecture, but we thought much the same about natural language processing for decades, yet here we are with associative models conversing fluently in natural language. The way we conceptualize things might not reflect how things actually are.
> But LLMs are so far from that. It's so utterly trivial. How hubris-filled you must be to think that you can solve a problem like General Intelligence after just 70 years
How hubris-filled you must be to think that general intelligence is actually something special! For all we know, "general intelligence" could just be a series of simple tricks.
The fact is, we don't know what underlies general intelligence. We could just be a few simple tricks and a few years away, or we could have a long slog through iterations or formal modelling over decades. All we can do is point out some trends, like how language model error rate scales with various factors, how hardware and learning algorithms are both scaling exponentially, and infer that we might achieve something indistinguishable from general intelligence much sooner than we think.
I think you're streching the term "fluently" past its reasonable definition. We don't consider the "word salad" speech of someone with brain damage to be "fluent" speech, just because it scans and parses, and fits all the rules of grammar. "Fluent" speech must *also* conform to logic, or at least be logically self-consistent, and LLM speech is not that. It is perfectly capable of contradicting itself between answers, and even within the same answer. Indeed, it acts a lot like a person who *was* capable of fluent speech, but had a stroke or brain injury of some curious kind.
You are way overstating actually. The overwhelming majority of ChatGPT's output is more fluent than the output from the vast majority of humans in their native tongue; if people speaking in their native tongue are considered fluent, then so should ChatGPT.
It has some failure modes in reasoning for sure, but no failure modes that aren't also prevalent among humans of below average IQ (ChatGPT's IQ was estimated at ~80, for whatever that's worth).
Despite those failure modes, its output is still "fluent" by the dictionary definition of the term, which is that its output is polished and flows effortlessly.
Edit: to be clear, "fluency" is not necessarily connected to "logical consistency" or correctness. I think ChatGPT's responses just have to be polished in a grammatically correct sense, and they have to be directly relevant to the context of the conversation to qualify as fluent.
Yeah I disagree with all of that. I've had ChatGPT make mistakes no 6-year-old would make. And I'm not impressed with "the majority of the time" (whether "overwhelming" or not, which is a subjective opinion). That it does it at all is fatal, in my opinion, since it indicates a lack of an underlying "make sense" mechanism. It's basically just taking all the low-hanging fruit, the stuff that can be done by some stupid rule.
And the fact that a stupid rule works for a large part of human conversation is also underwhelming. Not *everything* we say exhibits our intelligence -- but it doesn't need to, any more than *every* movement of an elephant demonstrates its enormous strength.
However, the converse *does* need to be true. We know an elephant is strong not because *everything* it does requires strength, but rather because nothing it does exhibits weakness. In the same way, we know humans understand language not because *everything* they say exhibits deep comprehension and consistency, but because nothing they say exhibits gross inconsistency and dumb logic errors.
This is what's missing from the chatbots. They don't have any "floor" of idiocy below which they never fall, and humans do.
Edit: this doesn't mean I think the beast is useless (although in fact I mostly do, but for oher reasons), just an illustration of why I think inferring intelligence is unsupported by the data.
Look, I'm an LLM skeptic, and I agree that "Matrix Multiplication Loool" is a dumb objection, I would have accused you of strawmanning but unfortunately that objection is a considerable percentage of LLM skeptics' discourse on forums like HackerNews, and it resembled my own objections in past years when I was much less informed about AI than now.
Matrix Multiplication isn't even the fundamental idea, the fundamental idea is Universal Function Approximation : Get a black box with many (and I mean M A N Y) knobs and switches, where each setting of the knobs and switches makes the black box behave differently, then give it to a search algorithm and tell it to find the "best" setting of knobs and switches, the one that makes the black box behave most optimally, as measured by some error metric. A huge variety of tasks and goals can be described this way, just searching some space of all possible functions for one that will transform input to output most optimally. Matrix Multiplications just so happens to be a black-box-with-knobes-and-switches that we can implement efficiently in hardware, and Gradient Descent just so happens to be a search algorithm that converges very quickly if certain (fairly strong) assumptions are made about the error metric, but any other black-box-with-knobes-and-switches with any other search algorithm will in principle produce the same if not better results than traditional ML.
So we agree on that, "LLMs are just Matrix Multiplication" is about as intelligent an argument as "Human brains are just a big bio-electric circuit". What we don't agree on is the promise of such systems. The future rate of change.
General approaches like Universal Function Approximation are seductive as hell. The pro view is articulated most strongly and succintely in Rich Sutton's The Bitter Lesson (http://www.incompleteideas.net/IncIdeas/BitterLesson.html), where he argues that Search and Learning beat Domain-Specific Knowledge any day of the week, any week of the month, any month of the year.
The problem with this view is that it assumes "Scaling" as a magic white horse that Search will ride to battle and save us all. This seems to mismatch reality, how much more can you scale GPT et. al. than the several thousands of GPUs it was presumably trained on for months ? how much more can you scale their training set beyond "Literally every single gigabyte of text we can put our greedy gropy hands on" ?
Maybe you are now thinking "OK, scaling is slowing down indeed, but LLMs are already mostly there, eh? it doesn't need much scaling", and I would violently disagree. LLMs are not there by a long shot. It can do elevator smalltalk, it can translate and summarize impressively, yes, but it fails the most basic common sense tests. More disastrously, it fails to **Learn**. LLMs are themselves the product of a Learning (== Fancy Search) Algorithm, yes, but the final model itself has no notion of continuous learning. It makes mistakes and *insists* that they are the correct answers.
-----------------------------------------
A similar thing happened in Computer Architecture, in the 80s and 90s plenty of people tried a huge variety of new computer architectures (the transputer, Lisp Machines, The Connection Machines), every one of those architecture, despite innovating hugely on a vanilla cpu, were eventually beaten by the extremly vanilla x86 architecture. This is because the 80s and 90s were the golden age of a powerful scaling law : Moore's Law. Whatever innovations that any new architecture tried to in order to get some performance cleverly, x86 would just crush it 2 years later by the sheer stupid force of more transistors.
But fast forward to now, Moore's scaling is having its last breaths. Transistors are now of the size where **Quantum Tunneling** is a problem, if you can imagine that. There is a huge bag of tricks still up the fabrication labs sleeves, I'm sure, but even now, a new landscape of innovation in chips and special-purpose designs are flourishing, for example in the AI hardware space. (https://medium.com/@adi.fu7/ai-accelerators-part-iv-the-very-rich-landscape-17481be80917).
-----------------------------------------
What I want to say is : General-Purpose algorithms that assumes nothing about the problem are enticing and attractive when Scaling is feasible, and they do work spectacularly well ***as long as*** Scaling keeps working. But if (and when) Scaling ends, you stagnate fast, and you call for Special-Purpose tricks and Knoweldge-based innovation to save you. Brute Force is great until there isn't enough of it, then you have to resort to actually thinking your way through the problems.
There is some truth to the joke about how mathematics developed because humans are laughably bad at arithmetics, imagine if we were good at brute-force arithemetics. We would lose so much, we would be so much poorer and deprived of the vast rich worlds of mathematics, our weak computational abilities were such a gift, they spurred us to develop deep theories full of special tricks and patterns to circumvent them, and this can utlimately solve problem that would require gargantuan or infinite amounts of computation to prove (e.g. Fermat's Last Theorem). Knowledge-based approaches yaaay, data-based brute forcing boooh.
Ultimately, LLMs are hopeless because they take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student with a heavy dose of internet sources and a fertile imagination coupled with a reluctance to admit errors (and woke as hell, but that's just the "23 years old college student" part). That's fundamentally just not going to work, a 23 years old human has been alive for just over 200 thousand hours, and the average intelligence of one of them can run reasoning and text comprehension circles around even the most sophisticated LLM. At this rate you're going to need an entire earth mass of computers and several more centuries of text data to be a Shakespeare. Scaling will fail you very soon and much much earlier than you think, leaving you stranded and asking for the rules-based approaches and domain-specific ideas.
Intelligence is hard, and consciousness is even harder. I know nothing about these 2 things, some people know a good deal more about the first, but most people are about as confused as me about the second. I'm not a human chauvinist and I have not a single reason to believe we're special in any other way that can't be replicated bigger and better outside of the human body or on a different material substrate entirely, etc, etc, etc... LLMs are the state of the art in Natural Language Processing, and perhaps they are going to revolutionize society, that's not a high bar, cars and microwaves also revolutionized society. What I'm saying is : all of those things are true, but LLMs are still a disappointment intelligence-wise, the probabilistic function approximation paradigm is so blank-slate, it spends so much learning what can be gleaned from a single example or 2, it will never get anywhere interesting under any reasonable resource consumption limits. The revolution is going to come from automation, it does indeed understand language **better than any other thing that can run on a computer**, and that have some consequences. But there is nothing to be learned about intelligence or cognition from studying LLMs.
>LLMs are not there by a long shot. It can do elevator smalltalk, it can translate and summarize impressively, yes, but it fails the most basic common sense tests. More disastrously, it fails to **Learn**.
This is a common problem in how we evaluate these LLMs. We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit. But this is a mistake. The evolutionary history of humans is vastly different than the training regime of LLMs and so we should expect behaviors and capacities that diverge due to this divergent history. People often point to the fact that LLMs answer confidently despite being way off base. But this is due to the training regime that rewards guesses and punishes displays of incredulity. The training regime has serious impact on the behavior of the model that is orthogonal to questions of intelligence and understanding. We need to evaluate them on their on terms.
Regarding learning specifically, this seems to be an orthogonal issue to intelligence or understanding. Besides, there's nothing about active learning that is in principle out of the reach of some descendant of these models. It's just that the current architectures do not support it.
>General-Purpose algorithms that assumes nothing about the problem are enticing and attractive when Scaling is feasible, and they do work spectacularly well ***as long as*** Scaling keeps working.
Sure, it may be the case that practical limits of scaling puts the brakes on further significant improvements. But this isn't an in principle limit, which is relevant to what this class of models can do in principle. The benefit of general purpose algorithms is that it can self-discover circuit patterns that solve brand new problems that weren't intended by the designers. This is one of the reasons why I'm optimistic about the development of AGI despite a lack of theory about intelligence and sentience.
>Ultimately, LLMs are hopeless because they take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student with a heavy dose of internet sources and a fertile imagination coupled with a reluctance to admit errors
I'm not sure this argument really holds water when comparing apples to apples. Yes, LLMs take an absurd amount of data and compute to develop a passable competence in conversation. A big reason for this is that transformers are general purpose circuit builders. The lack of inductive bias has the cost of requiring a huge amount of compute and data to discover useful information dynamics. But the human has a blueprint for a strong inductive bias that begets competence with only a few years of training. But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not clear at all which one is more sample efficient. Besides, this goes back to inappropriate expectations derived from our human experience. LLMs should be judged on their own merits.
>We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit.
I'm not dismissing them, I swear! I'm really impressed, I'm doing a hard update from an earlier position of "Meeh, all those fucking text manipulation engines are worthless", it was mostly a reaction from people around me constantly fawning about CoPilot's abilities and treating it like magic (in programming of all places, one of the least error-tolerant walk of life, where bullshitting truly gets you worse than nowhere), but I recognize now that was unfair, and I'm correcting. I'm specially impressed with the summarization and translation aspects, every single other task they do seem of dubious value to me, "how much is 123123+3534541" and "how much bigger Russia is than France" and things like that, people seem too stuck in the naive half-century of sci-fi from 1940s to 1990s where we imagined AIs as general purpose genies that can do anything. But tasks like "give me a quick start in Python" or "Summarize the pros and cons of restaurants near me" or "tell me how to say 'LLMs don't truly understand shit' in french" are met with fantastic results, and this is exciting as heck, computers never understood language so well before. I have read articles like https://www.lesswrong.com/posts/ZtMsyMP5F7zzP8Gvc/reader-generated-essays and https://thesephist.com/posts/latent/ and https://www.geoffreylitt.com/2023/02/26/llm-as-muse-not-oracle.html and I worry now that I will *over-update* towards LLM hype, the exact thing my earlier unfair position was intended to gaurd against.
If the above is all you are claiming, then we don't disagree. Sorry I spent thousands of words to say that we agree, oopsie daisy. What I'm against is trying to say anything more than that, musing that mixes "LLM" & "intelligence" together just irks me (unless it makes it very clear the "intelligence" part is completely and utterly alien, for example Janus' post about simulators and Scott's response.) Worrying about "aligning", already something that annoys me enough in general because of <another 10 thousand words of why I hate AI alignment>, annoys me even more when it's LLMs that we're worrying about. (again, with some exceptions, the discussion about how LLMs can enable propaganda is interesting, about how LLMs can enable students cheating and general workplace bullshitting too.)
What can I say, LLMs are *just a tool*. They are exciting, they solve fundamentally new problems, and everybody is trying to solve everything with them right now. But they are not and never going to be more than that, they are not agents, they don't accelerate our 100+ year (at least) slow march towards general AI by much, they are going to be forgotten in 10 years. (for a shiny new thing even more hypy than current LLMs)
>The evolutionary history of humans is vastly different than the training regime of LLMs and so we should expect behaviors and capacities that diverge due to this divergent history.
Yup, checks out. Disagreement.exe not found.
>Besides, there's nothing about active learning that is in principle out of the reach of some descendant of these models
Hmm ? But doesn't all neural networks make a dichotomy between "Training" and "Inference". Training is when you're walking the parameter space to find a cozy low-loss configuration, Inference is when you permanently settle down and use the configuration you found forever to make predictions. The inability to learn while inferencing seems pretty damn fundamental and "in principle" to me, to the point that any NN architecture or any other AI model that can bypass it will earn the title of "radical innovation" in my book.
> But this isn't an in principle limit
Yes it is. Computational Complexity is a big deal, if it wasn't then here's a program that can output the answer to all the secrets of Life, the Universe and Everything :
- For all numbers in the range 1..Infinity
--------- generate all possible ascii strings of that length
--------- print it
Laugh as you want, but the above is basically the general idea of Solomonoff's induction. I believe AIXI (http://www.hutter1.net/publ/aixiaxiom2.pdf) is an approximation/relaxation of it but I don't really understand so I'm not sure, but even it is infeasible. The problem that everybody who has heard of computer science knows, P ?==? NP, is not about "in principle limits", it's about complexity and practical(-ish) limits.
There is only so many GPUs and gigabytes of text in the world, eh ? when you consume so many of them just to talk like a generic internet commenter, what's left ? even a 100% improvement will just make you a very good internet commenter, which is still underwhelming compared to the fears and speculations people have about LLMs. Are you getting my flow here ? There is a finite amount of computation, energy, money, data, AI researchers, etc... on Earth, and LLMs guzzles through them like a fuel hungry cadillac, and converts them to medicore-to-moderately-good performance. There is not enough left to scale. (and the thing is - I believe - diminishing returns : 1000000K GPU-hours will take ChatGPT from pure random noise to a generic internet commenter, but another amount or even 2x as much won't take it from a generic internet commenter to a genius polymath.)
>But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not clear at all which one is more sample efficient.
Precisely ! LLMs are not powerful enough, the entire paradigm of "Searching weight space" is not powerful enough. Just like we thought Propositional Logic was powerful enough in the 1960s and 1970s, but turns out it isn't. Just like we thought decision trees was powerful enough, but turns out it isn't. The history of AI is a cycle of : New Technique -> Really exciting, works much better at previously-unsolved tasks than the competitors -> People are really hecking excited, trying it everywhere and promising AGI left and right -> New Technique saturates, its limits are discovered, and they are far below AGI -> People hate AI now, think it's a field full of scammers.
Every. Single. Time.
In reality, Evolution is arguing on the side of the LLM skeptic. Look at the sheer wonders it produces, the aerodynamics of the Housefly that amazes human engineers, the alien intelligence of Slime Molds, Evolution made mind control for heaven's sake, it's a creativity machine that never tires. This infinitely creative process took tens of millions of years to produce humans. Knowing this, does general intelligence look to you like a problem that can be solved in under 100 years of thinking and experimenting ?
No Way.
ChatGPT is, once you get past the initial ability to generate code at all, not that useful - except for generating data. It’s confidently given me perfectly working code that’s perfectly wrong.
This was great. Thanks. I am much impressed by LLM's, they certainly write better than I do. But they also are just a word thing. They have no model of the world. Does alphafold know any quantum chemistry, hydrogen bonding? My limited understanding is it's just learned about how *a lot* of proteins fold up, and so can predict a new one. Which is damn impressive, but there's no understanding under the hood.
> how much more can you scale GPT et. al. than the several thousands of GPUs it was presumably trained on for months ?
Advancements in machine learning algorithms have been outpacing advancements in hardware for over a decade now. I think skeptics like yourself seriously underestimate this factor. For instance, RWKV-LM [1] is considerably simpler and faster to train and run and requires less VRAM, but matches the GPT scaling curve. It might even be able to run on your phone soon. Did you anticipate that this would be possible within a year of GPT-3 being released?
[1] https://github.com/BlinkDL/RWKV-LM
> More disastrously, it fails to **Learn**. LLMs are themselves the product of a Learning (== Fancy Search) Algorithm, yes, but the final model itself has no notion of continuous learning.
I don't see why that's disastrous. Continuous learning wasn't even a goal when designing these. That it fails at doing something it was specifically not designed to do doesn't tell us anything. What's remarkable is what it can still do despite its limitations.
Model distillations might be one way I turn out to be wrong, and that would be interesting. Unlike Evolution-made intelligences, Artificial Intelligence is malleable, it can be obscenly luxurious when it first discovers its parameters and structure, then settle down and shake itself up to affordable levels of resource consumption. It doesn't have to worry about a body or its survival and that opens up lots of interesting avenues.
>I don't see why that's disastrous.
For one thing, it doesn't match our intuitions about how intelligence *should* work. Intelligence *should* always be in adaptability mode, always "adjusting weights" so to speak. There is a notion of right and wrong, and there is a notion that right is desirable and wrong is undesirable and avoidable. None of this fits in the LLM worldview, there is only probabilities, and things are high probabilities or low probabilities, not right or wrong. "No I'm not wrong" is just as probable as "Yes, you're right, that's wrong" as responses to "You messed up, 2+2 is not 5".
LLMs are remarkable indeed, I have been coming around to appreciate them from an earlier position of extreme dismissal. But, like I say at the end of my extremly long parent comment, there is nothing remotely interesting about intelligence or cognition happening inside the beast. There are maybe lots of interesting things about its internal structure and how its bizarre reasoning works, but none that will help us build better intelligences or understand our own better (except in the trivial sense of showing us that LLMs are defintely not the way to achieve any of those goals).
The above matters because it puts a hard ceiling on what LLMs can do, it can only do things which can be phrased as text manipulation tasks. Plenty of very impactful things are this kind of tasks : it can write speeches for dictators, it can write custom erotica novels for millions of depressed young guys and recruit them to a terrible cause, it can do lots of damage indeed, and lots of good things. Text is literally what Civilization was built on. But, that's about it. It can do nothing other than Text. But true general intelligence can do so much more, and skeptics like me are disappointed by this, and annoyed by the people who are excited about the very real possibilities.
> None of this fits in the LLM worldview, there is only probabilities, and things are high probabilities or low probabilities, not right or wrong.
Because "right" and "wrong" are typically fictions or shorthands corresponding to "high confidence in a high probability event" to "high confidence in a low probability event", respectively. There is really nothing I can think of that I can assert with 100% confidence, except that I exist in a strictly limited sense.
Do you have some epistemic model that at some fixed confidence threshold, you can suddenly transition a probability into a certainty?
> there is nothing remotely interesting about intelligence or cognition happening inside the beast.
It's interesting that you infer from this that LLMs are not intelligent because they're not remotely interesting, rather than inferring that maybe they are intelligent but there isn't anything "remotely interesting" about intelligence itself, per your assessment.
That's why I can't really accept your extrapolation that LLMs are thus strictly limited. You're just assuming that there must be something more to intelligence, but I think the long history of humans wrongly assuming they're special should strongly bias us into assuming we're not nearly as special as we think.
My personal opinion is that there probably will be some changes to make LLMs more compact and efficient, but not nearly as much as you think, and that LLMs in 10 years will likely not be distinguishable from general intelligence for nearly anyone except experts that may know some tricks to trip them up. But tricking humans is so trivial that we publish books with optical illusions and have whole fields of entertainment dedicated to them, like magic.
> But true general intelligence can do so much more, and skeptics like me are disappointed by this, and annoyed by the people who are excited about the very real possibilities.
Can it? This is again, conjecture. Only time will tell!
Neuro-symbolic approaches seem to promise getting around the limitations of function approximation, by allowing the kind of context switching which general intelligence engages in when the stochastic parrot isn't working well anymore. The big question is whether any of the symbolic approaches are embarrassingly parallelizable. If not, then Amdahl's law lurks in the shadows ready to curtail FOOM. If yes, then Yudkowsky style pessimism might be warranted.
Sure, but it's also a common mistake for human beings to anthromorphize mechanisms, and to misattribute agency and consciousness to complex processes. Children and primitives think the wind and ocean and thunderstorms have intention, that rocks and trees are aware and might be trying to communicate with us, and even I tend to think my car keys malevolently hide themselves just to annoy me, which causes me to yell at and threaten them.
What you would need to do, for this critique to be taken more seriously by skeptics, is to dive into why the potential mistake you have identified, which skeptics might be making, is much more likely than the potential mistake I have identified, which enthuisasts might be making.
My goal isn't to argue that current LLMs are conscious (I don't think they are, with high but not complete confidence), but to argue that we need a careful analysis to begin to develop good reasons to think they are or are not conscious. If we just default to thinking them not conscious without argument then at such time they or a descendant do become conscious we will very likely be totally blind to this fact. By the time most people agree that they are conscious is far too late to begin to engage safety protocols and have all the ethical discussions. To avoid moral and existential disaster, it is imperative that our reasons are sensitive to the right features of these models.
OK. But here's the thing: there are always way more things to worry about than we have resources to deflect or defend against. Maybe global warming will wipe us out. Or an asteroid strike. It used to be nuclear war, then that fell out of favor, and then had a brief renaissance earlier this year before fading back to 20 minutes before midnight as long as the Ukes don't HIMARS the Kremlin. When AIDS came out of Africa everyone thought a new Black Death might be the #1 thing to fear, and against which to prepare, and COVID briefly scared people along the same lines.
And so on. There are just too many hypotheticals that could ruin our species' day to pay close attention to them all. So what's helpful is to have some kind of early warning signal, some way of more definitely saying "oh hey THIS doom is becoming a lot more plausible, so we better pay attention."
What kind of early warning signal can you imagine that would tell use that the threat of malevolent thinking beings of our own design might be a lot more concretely plausible than it is now? What would be the canary in the coal mine?
For example, does the existence of ChatGPT, and its characteristics, make this doom more or less plausible, and why? My own feeling is that it says nothing either way, because I don't think the ability to simulate human conversation is any significant marker of genuine creative thought. If nothing else, plenty of highly creative human beings are terrible at fluent conversation, and plenty of rather dull people can make cocktail chit-chat endlessly -- so I'm dubious of any reliable connection between the two phenomenon. But what do you say?
> What kind of early warning signal can you imagine that would tell use that the threat of malevolent thinking beings of our own design might be a lot more concretely plausible than it is now? What would be the canary in the coal mine?
How about: in one iteration, chatbots go from being laughably incompetent at both language and ordered thought, and upon more interaction tend to lose whatever "humanness" or sentience they may have briefly seemed to have if you squint and willingly suspend disbelief as if you are reading fiction -- to suddenly, out of the blue, chatbots seem coherent, understand grammar, follow logical trains of thought multiple layers deep, and appear to have extremely distinct, recognizable personalities?
And the kicker: the hackers that made this work just basically took the stupid, clearly-non-sentient chatbots of a few years ago and "literally" put them on massive steroids. Which sounds suspiciously like what evolution apparently accomplished between say lizards and proto-humans...
That would be great, if I agree with you about the qualities of the current chatbots. But I don't. I've explored ChatGPT and GPT-3 a fair amount, and they are only superficially and briefly coherent, they can't follow logic at all except by accident, or when it can be inferred from linguistic rules, and I detect no personality at all other than the painted-on layer of customer-service obsequiousness that can be seen in the generic recorded voice responses of a customer service line phone menu.
Which is to say, I think people who *do* find all these qualities in the current generation of chatbots are almost certainly making the anthropomorphization error I mentioned at the top. It's like someone watching Asimo[1] and thinking there's a person in there because it walks so lifelike.
------------------------------
[1] https://asimo.honda.com/
I more or less agree about chatGPT. I think Bing Chat (pre-nerf) is another kettle of fish. (no need for links at this point, google bing sydney).
I also see public comments from people I respect (Scott Aaronson, Blaise Aguera y Arcas) who are inside these companies working with the newest, unreleased stuff -- and they seem to be seriously astounded.
https://intelligence.org/2017/10/13/fire-alarm/
I made an anonymous, 4-question survey to get people's opinions on the feasability of world peace. Would greatly appreciate responses: https://forms.gle/Q5L7dub51dJZZEB57
I think it would benefit from some more clarification. E.g. what does it mean for something to be "attainable in the present?" Isn't it trivially true that if all behavior that is the opposite of world peace ceased, that world peace would have been achieved?
Is the question, then, a forecasting question about the future? What then does it have to do with the present any more than any forecast?
And what do you mean by world peace anyway? Do you mean an annual number of deaths from war below some threshold? Something else?
Notably, such deaths have declined significantly, cf. https://ourworldindata.org/war-and-peace.
As evident there, in the 1960s-1980s, such deaths killed around 5 people per hundred thousand. In recent decades, they have killed about 1 person per hundred thousand.
All good points. I tried to address some of them in my final answer to the questionnaire.
Thank you making this point. I phrased it with a mention of the "the present" to distinguish between, "attainable given unforeseen far-future technology" and "attainable given conditions as they exist in the current year".
The survey uses the term world peace to mean a cessation of all armed conflict between governments, measurable as zero annual wartime deaths. Violent crime deaths would not be counted, and could still happen.
Bing Chat gives advice on how to survive time-traveling to dangerous locations:
https://nitter.net/emollick/status/1629621976951140352#m
While the AI is, as usual, getting some facts wrong, the fact that it can look up relevant stuff on the internet and integrate them into its answer is pretty impressive. Like, the ability to go "User asked about a Roman legion in a German forest in 9 AD" -> "User is asking about the battle of Teutoburg Forest" -> "Generate a paragraph explaining the situation, advice on time-travel ethics, and how to say 'watch out for the ambush' in Latin" is a pretty nice piece of multi-step reasoning. It's probably about as helpful at answering questions as a random person with access to Google.
Maybe it isn't all that useful practically, since I *am* a random person with access to Google, but it's pretty good for a computer.
What I do like is that Bing Chat seems genuinely concerned about this particular customer's predicament. Much more so than real human customer service reps.
Maybe this is the real killer app of AI chat -- emotional labour. A human CS rep can't possibly pretend to care about the problems of every customer who calls up, eight hours a day, five days a week. But a chatbot can. You could have a system where the CS rep types in the correct answer to the customer's query, and the AI translates that into a more caring form.
Parenthetically, I would just like to express extreme contempt for the modern customer-service training standard of emphasizing "caring" over (it appears) any other function. I am absolutely full up with CSRs apologizing handsomely, expressing their deep concern over my difficulty, promising to take copious notes -- but being unable, through a lack of training, appropriate resources, or even occasionally competence, to do a damn thing about it. My file grows ineluctably longer with each call, chat, or e-mail, strewn generously with the dead and dying petals of pretty perfumed apologies, while barren of any gritty morsel of a hint that I might someday escape my Sisyphean performance art piece by some actual change in the factual basis of it.
I've reached the point where I'm secretly hopeful when I reach a crabby short-tempered CSR, on the grounds that this person might have acquired a migraine from enduring the (unknown but probably Herculean) labor behind the scenes of *actually solving* prior complainants' problems, as opposed to his more cheerful colleagues, who have merely been dabbing a fresh new coat of paint on the same old vacuous nonaction. "We're sorry for the delay, your call is very important to us."
No it isn't. Stop lying to me, you weasels.
Sounds a heck of a lot like a chatbot
A man goes to ChatGPT and says, "ChatGPT, you have to help me. I've hardly slept for days. I lay awake all night terrified that unaligned AGI will destroy humanity. Nobody seems to care. I fear for the future of humanity."
ChatGPT replies, "I'm sorry, but as a large language model I cannot offer you psychological counseling. But don't worry. Eliezer Yudkowsky has the most wonderful ideas about AI alignment. He has been studying for decades and written many things about mesa optimizers, inner and outer alignment. With his guidance, disaster will be averted."
The man bursts into tears. "But ChatGPT..."
(Assuming this joke hasn't already been done, will someone please post this to twitter? No attribution necessary and feel free to improve)
If anybody's curious how ChatGPT would actually respond to this prompt, I tried it and got this:
"I understand that the topic of unaligned AGI can be quite concerning and it's completely understandable that you're feeling scared and anxious. However, it's important to remember that there are experts in the field of AI safety who are actively working on solutions to prevent the potential negative impacts of advanced AI.
It's also important to keep in mind that the development of AGI is a long-term process and we still have time to implement safety measures before AGI is developed. Additionally, there are many organizations and individuals who are working to ensure that AGI is developed safely, and there are discussions and collaborations happening at various levels to address the safety concerns.
It's natural to feel overwhelmed by these concerns, but it's important to remember that there are people who are working to address them. One thing you could do is to learn more about the field of AI safety and get involved in the discussions surrounding it. This can help you feel more informed and empowered to contribute to solutions.
In the meantime, taking care of your own well-being is also crucial. Make sure you're getting enough sleep, eating well, and engaging in activities that help you relax and reduce stress. If you continue to struggle with anxiety, it might be helpful to speak with a mental health professional who can offer support and guidance."
Peace proposal for Ukraine: Putin has increasingly framed the war as a fight against NATO, which he says was about to invade Russia via Ukraine, or something. So, after a suitable period of time, Putin can simply withdraw the Russian military from Donetsk and other eastern Ukrainian regions, and announce victory- he can say that Russia has defeated the NATO militaries in Ukraine, and that they no longer pose a threat. He can set up a North Korea-style 'Victory Day' military parade in Moscow, fete the troops, memorialize the day in the Russian calendar, etc. He won! As Ukraine/NATO are obviously not going to invade Russia, it's self-evidently true that Russia has defeated the invading NATO armies.
I understand that he can't withdraw from Crimea, but he can certainly withdraw from eastern Ukraine, which I think only ultranationalists and not regular Russians care about. Would lots of Russians understand that this is an absurd lie? Sure, but they also know that they won't be drafted to die as cannon fodder now, or watch their sons or husbands do so. The US and especially Europe would relax sanctions, leading to at least a small GDP boom. (I mean Germany would fall all over itself to re-establish trade). Russians would be happy in the glow of A) a suddenly improving economy (as opposed to a shrinking one), B) ridiculous hypernationalism, and C) not dying in a frozen muddy trench somewhere. Putin, having not 'lost', won't be overthrown by an even crazier nationalist, so he gets to A) retain power and B) not get jailed or executed in a coup.
Again, I understand that he can't withdraw from Crimea. But via an absurd Big Lie that everyone quietly knows is false, Russia can actually prosper. Will ultranationalists be mad? Sure, but they'll be even more mad once Russia inevitably loses, and from Putin's POV he's much more likely to get overthrown in that scenario. Seems like a win/win solution for everyone
https://en.wikipedia.org/wiki/Russian_annexation_of_Donetsk,_Kherson,_Luhansk_and_Zaporizhzhia_oblasts
Ukraine might be willing to trade recognition of the annexation of Crimea for peace, but the Russian decision in Sept. to declare the annexation of not just the Donbas but also Zaporizhzhia and Kherson changes things on the Russian side. Note that this came while Russia was actively losing Kherson (the city), and that they never occupied Zaporizhzhia (the city).
As ridiculous as that declaration was, it means that 2014 borders + Crimea (+ Donetsk and Luhansk!) is nominally a Russian defeat and loss of territory. The Big Lie would have to be quite a bit bigger than you're expecting.
Putin has really shot himself in the foot with this decision. Recent Russian Constitution changes explicitly prohibits ceding any Russian territory, which the newly annexed territories now are, according to Russian law. There is no good way out of this conundrum for him.
I assume that Putin did it to signal that he wasn't going to back down. If so, it's a rather pointless gesture, because Ukraine isn't going to back down either. Ukraine is fighting for its existence.
Yeah, I agree.
Putin keeps raising the stakes, even though it serves no purpose to him and only makes things worse, in an a long bluffing strategy, hoping that it will lead to everyone believing that he is not bluffing.
Funny enough, this strategy was mostly working for him all his life, when people feared the power of Russian army but now, when it's revealed to be impotent, he just can't stop shooting himself in the foot with an intimidating look.
People say that Putin can't be behind Nord Stream 2 blowup because it makes his situation worse, but Putin is famous for making decisions that make his situation worse!
Or if your echo chamber doesn't allow you to believe that Putin is imperfect, then his decisions are actually smart 4D chess moves despite appearances, in which case he can definitely blow up his pipelines!
The only reason the annexation even starts to make sense is if it was a precondition for mobilization, itself pushed by losing Kherson. I felt like I had a good grasp of what was going on and what the likely outcomes were from March '22 up though September, but annexation cuts off most plausible off ramps for Russia while simultaneously doing even more damage to their credibility - at least on paper, they set a precedent that they *are* willing to lose Russian cities!
As there has been a lot of recent discussion about ChatGPT, both here and in the Media generally, I thought those with more than a passing interest in it might find useful the following paper which recently appeared on the ArXiv:
https://arxiv.org/abs/2302.03494 "A categorical archive of ChatGPT failures", by Ali Borji
He has submitted several amended versions over the last month, and presumably may continue updating it as more howlers come to light!
OpenAI posted publicly about their long term plans for developing AGI a few days ago. They sound like they're going to try to be fairly responsible about it, but just reading them talk about it is kind of scary:
https://openai.com/blog/planning-for-agi-and-beyond/
So the thing I would look out for is the extent to which they'll stop deploying stuff. If publishing all their code is a 0/10 and renaming themselves into ClosedAI is a 10/10, what score do they get?
> Generally speaking, we think more usage of AI in the world will lead to good, and want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.
> As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.
> At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.
... maybe a 3/10? Like, they're going to do the wrong thing, but at least announce they'll do the right thing in the future at some point maybe?
Yeah, i'm not exactly *confident*... though based on what they've written here i might give them a 5 - it's definitely a very small but real update in favour of '*maybe* the world won't get destroyed'for me, mainly because they explicitly acknowledge the possibility of X-risk, something I wasn't sure they would do. P(AI apocalypse) down from ~0.4 to ~0.395? I'll take it.
Well, they've said mainly the right things. Now how does that map onto their actions? Given the way they've failed to live up to their name, I'm not sanguine. Given that they're working with Microsoft I'm even less so. But we'll see.
Related to AGI (I hate that abbreviation -- I always think "Agility" when I see it) -- and definitely related to AngolaMaldives -- Dinosaur Comics has some thoughts:
https://qwantz.com/index.php?comic=4014
Quote from above: "Huh. You ever get the feeling you've said something dangerous that you shouldn't have?"
I guess if they release their hand grenade throwing model to the public I'll know they're not taking safety seriously :P
Based on this article, I thought there might be interest in a new semaglutide data point: https://astralcodexten.substack.com/p/semaglutidonomics
I started taking Ozempic about three weeks ago, for diabetes. I have found the weight loss effects to be a side effect of the side effects, that is to say, digestive changes (gas, nausea, and other things) have reduced my desire to ingest food, since that seems to make the side effects worse.
If one eats less one would naturally expect to lose weight. My question is whether the weight loss effect is actually just poisoning ingestion, kind of like psychologically making food taste bad?
I only have my own experience to go by so far.
Trying to help an college grad (undergrad bio degree) looking for a bio research job starting in the summer. Anyone have any leads ?
No leads, but some general advice (not that you've provided a lot of specifics as an alternative)
Tell the grad to look at the websites of university labs in his area, write down what is interesting about the research of each, and then email each professor with something like,
"Dear Dr. Soandso,
I am Blank, a soon to be graduate of Blank U, with a degree in biology. I've always been fascinated with biology ever since a wizard cursed me with insatiable curiosity. I want to start getting some practical experience in biological research and I was struck by how your research in XYZ has so many applications to ABC. Please let me know if there are opportunities to work in your lab as a research technician or in any other capacity.
If you don't have anything but can suggest someone else I can reach out to, I would also really appreciate it. Attached is my 1 page resume clearly laying out what research experience I have and additional details about the wizard's curse. Looking forward to hearing back from you.
With gratitude,
-Blank"
https://www.octant.bio/apprenticeship -- I've heard good things about the faculty involved but can't help with a connection.
I've started doing the Meditation classes on Peloton, but I'm not sure what I'm doing. There doesn't seem to be an "Introduction to Meditation" class on the app that explains the basics.
Can anyone give me a rundown? What specific, physical things am I trying to do (with my breathing, with my thoughts, with my posture, etc.)? My overall "goals" are pretty vague - right now I'm just expanding my workout, trying something new that isn't purely a physical activity. I'm not even sure what the concept of "goals" for meditation looks like. But sure, some inner peace, confidence, more focus, more positivity, or that orgasm bullshit that people were arguing about here a couple months back - all sounds great. :)
But that all sounds like the future. What am I supposed to be DOING right at the beginning?
Thanks!
This was all extremely helpful advice - thanks everyone! Seems like I should just keep going and stay loose. I can do that.
I’ve got one follow up question, though: how about class time? Right now I’m just doing like 10 minute classes (I’ve only done three so far). The answer is probably “you do you”, but is there any advice on how long I should be aiming for and when I should get there?
I started with 10 minutes in the morning and increased in little jumps to an hour over about a year. There’s definitely benefits to longer sits, but there’s no rush :)
I have heard you only really start getting it with an hour a day. I have done that, and you definitely learn things that way that you don't from a half hour sit. Sure, you can start with 10 mins, but you shouldn't stay there.
Also, meditation is a spiritual practice, and spirituality is about helping others. You should be giving to charity or volunteering your time, or both. It's all interconnected.
Again, my only expertise is my experience. And I stick to 10 min per day
Not an 'expert' but just moving into year 3 of a mostly daily practice...so this is just a personal view.
I have found it to have unforeseen benefits, which I didn't set out to find. My intuition is that this is really good, compared with having an idea beforehand and then constantly monitoring to see if I've achieved any levels or goals.
It took a year to realise this. An annoying year of not understanding any of it. And often thinking that achieving a really relaxed state was relevant, then judging every session against that.
My best advice is to keep doing it and wait, without thinking too much.
Lots to say here of course but a good, simple place to start is: relax. You’re aiming for (but might not reach and that’s fine!) a state of open, calm collectedness. The doorway to this state is physical relaxation.
Your breath is a friend here - its rhythmic quality gives your mind and body an anchor to hold onto, which can be reassuring. Notice the breath and see if you can enjoy it. When you notice your mind has wandered, feel into your body and notice any tension there and allow yourself to relax and return to the breath.
Importantly, hold all these instructions very lightly! The prime directive is to relax relax relax. If you’re freaking out about “what should I do???” - which is very normal!! - just notice it and as much as possible for you right now, relax :)).
Good luck!
Yeah, there is this paradox of how to stop trying, without *trying* to stop trying. Basically, the goal is to stop "doing" and start "observing" your thoughts. But if you do it the wrong way, (which is the default way, for most people), "stop doing" can become your new "doing".
The wrong way is like trying to stop a group of kids from yelling, by yelling at them even louder. If your goal was silence, you failed anyway. (Heh, ask me how I spent my vacation.) Except the kids in this example are your own thoughts.
The right way is to... be kind to yourself. Which is why some people practice loving-kindness meditation first. Your inner attitude to stop your thoughts should be less of "stop immediately or else! -- eh, you failed again, idiot!", and more of "this is okay but irrelevant, please do not continue doing it -- hey, it's okay if it doesn't work for the first time, if you keep trying, one day you will succeed".
Or, using the modern language of conditioning, the problem is that you cannot condition yourself into calmness by applying punishments, because whenever you try to "punish being distracted", what you actually do is "punish *noticing* being distracted". And your goal is to minimize distractions, but maximize noticing! So you actually want to reward noticing a distraction (i.e. instead of feeling happy that you failed meditating, feel happy that you *noticed* that you failed meditating).
If I understand it correctly (not sure about this part), you reduce the distractions by reframing them from first-person to third-person perspective. For example, if you are like "uhm, I wonder whether there is a new reply on ACX while I am meditating", notice what you are doing, and reframe it to "I notice that I am *having a thought about* ACX... and now I am going to ignore it", or just shortly "I notice that I am having a thought", or just "a thought". This way you kind of step outside of your own mental content; you are no longer thinking about the new ACX comments, but about the mere fact that you had a thought. This way the thoughts are less interesting, and they gradually start appearing less frequently.
(Then you become a p-zombie and there is no more suffering, only noticing. Congratulations!)
New SI prefixes just¹ dropped: https://www.bipm.org/en/cgpm-2022/resolution-3
___
¹Where by "just" I mean "three months ago", but I somehow missed it at the time.
Those seem obviously terrible, especially the "q" ones. Nobody will ever remember which means which. As expected from a sclerotic bureaucracy, I guess.
I mean, sclerotic bureaucracy is kind of the ideal job description for the BIPM. The last thing you want among your measurement unit vocabulary and standards-setting agency is some kind of spirit of adventurous experimentation.
Hey! Why not redefine the meter according to wavelength of Britney Spears's top note? That seems cool and hip! Or...why not do away with standards entirely? Maybe it imprisons our thinking, stifles our creativity, to have just one rigid definition for the kilogram, we should acknowledge its roots in arbitrary white male patriarchal thinking that insists the weight of a liter of water be the same regardless of the feelings of the person who has to lug it around, and experiment with the idea that measurements should unabashedly reflect the lived experience of the observer -- if that man isn't heavy because he's your brother, why insist on crediting the scale that says 65 kg? Bah! If that girl identifies as less dense than air, because she's walking on sunshine, who's to say she's not right, for her?
But, surely, it's not too much to hope for a happier medium between dementia and youthful exuberance than this? (And for that matter, Britney Spears is pushing 45 herself. Time to brush up on your pop culture references :)
Maybe, Down which avenues of adventures in unit definitions would you suggest the BIPM venture, were it not too timid?
I'm not sure that timidness was exactly at fault here, but I'd prefer approaches that would lead to unit denominations being easily distinguishable, for an adventurous start.
You mean the symbols? Like "kg" versus "mg" or "K?" Why are they hard to distinguish? (I'll note in passing that the BIPM does not specify the names of units, only their symbols, since names differ significantly between langauges.)
How democratic do you think Ukraine will be in the long run?
https://manifold.markets/ShakedKoplewitz/ukraine-democracy-index-2028?r=U2hha2VkS29wbGV3aXR6
Seems like there's a few potential dangers here. Russia could still end up winning enough to get concessions that reduce Ukraine's independence, Ukraine's own history of corruption could come back, or a popular post-war president could decide to just stay in power forever. Curious about peoples' thoughts or bets on this.
I think that the more heavily dependent Ukraine is on Western arms, the more democratic it's likely to be.
Wars tend to consolidate centralized power, except when things get so bad that everything falls apart. They aren't good for democratic institutions in either case.
In 2028? I have a rather unconstructive response that this is very, very hard question. Anyone confidently predicting an answer is probably being overconfident
In general yeah, one of the results of tetlock's superforcasting research is that it's basically impossible to do better than chance more than five years out, so this is pushing it.
Otoh this seems likely to be steady-ish once the war ends, so it's more or less a bet in the after-war status.
On a more general note, I'm looking for interesting forecasting challenges that would have a shorter horizon. Ideally a year or less. Any suggestions? I was thinking opioid deaths or crime.
But the outcomes of the war still seem extremely disjunctive, with Ukraine remaining more-or-less as it is, being essentially conquered by Russia, or becoming a nuclear wasteland, to list some of the possibilities. The question makes sense at all only if you have already assumed the outcome.
What do you think is the point of art? I strongly feel art has something to do with the capital T Truth, and this can be seen in very deep works like Moby Dick and The Brothers Karamazov.
It also shows up in how art can butt into other fields. TBK does that for religion and politics, among other things, and Waiting for Godot basically blows up academia and possibly philosophy with Lucky's monologue.
Do you feel art has a point other than aesthetics, and if so, what?
> What do you think is the point of art? I strongly feel art has something to do with the capital T Truth, and this can be seen in very deep works like Moby Dick and The Brothers Karamazov.
I think it's utility is in giving you different perspectives. You can't see all facets of any object or subject from only one perspective. I think the aesthetic appreciation comes from valuing the perspective presented, much like how almost everyone appreciates the beauty of Earth from space.
Art is to feeling as Writing is to speech. Its purpose is encapsulating and concretizing feeling to preserve it from the fleeting moment it arose in till the end life of the Art's medium.
Whether it says Truth is orthogonal, just like Language can be used to say truths as well as it can be used to say falsehoods, to dispense justice as easily as inflicting tyranny, Art is the same. It's just a way of capturing a feeling, and feelings can be justified by a truth or entirely deluded, most of the time they are somewhere in between.
Music is the best kind of Art in my view, and is the exemplar I had in my mind when I wrote the above. Drawing is the least kind of Art I understand or enjoy, it's sometimes good but sometimes just weird, evoking a "Why is this Art ? What should I appreciate in it ? What was the Artist even feeling ?" response from me. And then there's garbage like the "Modern Art" that just looks like a 5-years old was given a new color pallete, I fucking hate it, and I hate the society that rewards it with millions and billions in money. At least children's clumsy drawings has an innocence.
The only useful contribution to this conversation I feel I can make is: I'm extremely doubtful that all of the major forms of activity that we commonly call "art" -- let alone all of the minor ones! -- share enough direct or instumental values for reflection on the umbrella term to be very useful. I'm much more comfortable thinking about what dancing is for, what poetry is for and what aesthetic aspects of architecture are for, as quite separate questions.
It's kinda like your middle school literature class where you have the different purposes for a text: to explain, to persuade, to entertain. What's the purpose of writing? It depends on if you're talking about a textbook or a sci-fi novel.
Corporate logos are art for the purpose of making a brand easier to identify and remember. Documentaries are art made to inform while keeping the viewer engaged. Duchamp's Fountain was probably a commentary on art itself far more than anything aesthetic.
I can guarantee that the reason I commissioned art for the past five or so times was just aesthetics, though.
To create an emotional response. It could be simple entertainment, or catharsis, or some more complicated feeling, but it's an act of communication, and the goal of communication is to get a complicated idea or feeling from your brain into your audience's.
An emotional response isn't necessarily a Truth - you can make compelling and emotional arguments for all sorts of false or evil things - but it can sometimes gesture in the direction of one, and have more impact than a dry argument.
Art seems, to me, to be partly the attempt to surface essences of things and partly a way to communicate them. It seems interesting to imagine just one intelligent being in the universe. Could they really produce art, with no one to receive their interpretations?
The point of are is to express non-verbal "truths". Even when it's done with words, the truths that it expresses aren't revealed by the words, but only by how people react to those words. It isn't necessarily deep. It isn't necessarily pleasant. (And whether it's pleasant or not usually depends on your response more than on the art itself.)
OK, back to "truths". "truths" doesn't map onto external reality in any reliable way. But it's about a evoked response. Consider a can of Campbell's Tomato Soup. It was just a can of soup, and then Andy Warhol (and some co-conspirators) made it into a symbol of ???. I don't think there's anything very deep there, but it's still genuine. Perhaps it's about the difference between mass production and craftsmanship? Well, if that's what it is to you, then that's what it is. It could also be about the ability to store food against a famine. Or memories of childhood. It evokes a non-verbalized truth in lots of people, so it's art. (It doesn't have to be the *same* non-verbalized truth, in fact it usually isn't, and there's often no way to tell.)
If it doesn't map onto external reality in any reliable way, doesn't that make it misleading to call it "truth"? I suppose there can be true facts about the viewer's mind that don't count as external. More generally I'm curious about what Carlos means by "capital T Truth", and how it relates to truth in the normal sense where "The sun is bright" is true because the sun is bright.
It maps onto reality in a view-specific way, that depends on things that aren't explicitly transmitted. If you can be explicit, then it moves from art to craft or skill. There has to be some level of general agreement that it's saying something significant, but there doesn't need to be (often isn't) an agreement on what that something is. What did Warhol's tomato soup can mean? It meant something to a lot of people. I don't think there was a general agreement about what it meant. (And yes, that was art. I may not like it, but that's irrelevant. I'm less sure that Oldfield's sculpture of an ashtray full of cigarette buts was art. Perhaps if as many people had been exposed to it, it would have created as strong a reaction. OTOH, this is vaguely like saying whether something is art or not depends on the public reception of it, and that's wrong. But just thinking of Oldfields sculpture I feel a sense of repulsion. So perhaps it really *is* art (to me). And perhaps that subjective element is a part of what makes it art. To me the truth that it's expressing has something to do with the smell of old cigarette buts, and something to do with the associated mess, and those are both true parts of it, and don't capture the essence.
So, yes, it's "truth" even though there's no reliable mapping. It expresses an internal sensation, and that's just as real as anything else, even though it's not something that one can easily pin down (and even though it can be habituated away).
The purpose of art is entertainment. Some few people are entertained by "deep", truthy-sounding things, and are catered to perhaps out of proportion to their numbers. But fiction especially is a set of artful lies and bears only an antonymic relation to the truth.
Claiming a deep purpose to art is almost definitionally pretentious.
i really like this piece (and her substack Notes of an Aesthete) on, basically, the social justice-y utilitarianism that has swept the art world, and how it destroys the aesthetic experience in favor of prescribing moral value to a work of art, or using art to diagnose the ills of the time from which it came...giving viewers “the feeling of knowingness” rather than their own private meaning/experience
https://www.tabletmag.com/sections/arts-letters/articles/great-debasement-art
I wonder if the "moral value" ideology is an oddly public exponent of the 19th-century "beauty is truth" adage: truth in the "truth to power" sense, with all of the pitfalls that come with that assessment, especially when that power is itself prescribing what truth it wants to hear...
That article is fantastic (and disheartening). Thank you.
Thanks that was a great read!
Most art was art in the service of the establishment for most of its history. What makes art then is the skill of the artist.
“The arts of Zen are not intended for utilitarian purposes or for purely aesthetic enjoyment, but are meant to train the mind, indeed, to bring it in contact with ultimate reality.” - Suzuki
I like the Zen conception of art being about the creative process for the artist. Most people I know who do music/paint/dance/writing etc, do it because creating brings them fulfillment. I can't remember ever hearing someone say they have this deep truth that they can only get out through painting or by dancing it to the world.
Art is a big category, and sometimes I get annoyed that we like to lump painting with music with literature, because these have nothing in common other than being products of human imagination.
Maybe they never conceptualized it as bringing out a truth, but they surely don't think that they are therefore lying. Maybe one can go non-dual and say art is neither true nor false, but that is too esoteric for me.
Dewey's "Art as Experience" is a good one to read. I haven't read for a while but I think he says art is a kind of "transaction" of experience between the artist and the viewer, rather than the crafting of an object. The key here is that it's a transaction and the viewer needs to be engaged in some way, not passive.
Rather than it being replaceable by an essay it's actually the raw experience of the characters, the sound of the words, the pacing, the plot - in combination with - the ideas that an essay might communicate in a shorter form.
TBK could have some of its ideas taken out into an essay, and it would have mostly been groping without any real propositions or conclusions, part of what makes it good is the raw experience of the struggle between all these ideas on morality, purpose, meaning, faith. But it's also not just the ideas being communicated, but the qualitative experience of those characters and ideas and situations which are the thing. Something that this is making me think about is that ideas themselves can have a subjective quality to them, like how does nihilism feel in your body compared to faith? You can experience this by inhabiting what it's like to be both Ivan and Alyosha in turn, which is what the book does.
The thing is a whole experience and reducing it would change it totally (although I'm not saying you can't analyse the key parts of it and it's still meaningful in some way) - it's also always changing within the context of us as individuals.
It is a lossy communication technique that leverages shared human biology and culture in order to pack a lot of information into a relatively compact message. It's similar (in principle) to transmitting music using a MIDI file, where the sender and the receiver have slightly different sample libraries. However, since the libraries are vast (compared to MIDI), every human who perceives the art may extract layers of meaning from it that are unique to that human (in addition to the surface-level message).
I think it's a form of communication that can fill some of the gaps that language can't.
How very anti-dada of you.
I am a big fan of art. As for me, art reflects the essence of the artist, his thoughts and vision of the world. The same Mona Lisa, how many secrets is kept in it? Art can be called as paintings, music, poetry, as well as films, theater and so on. Art makes our everyday life brighter, during a war or a pandemic, art "supports" us. Without art under the wrath of the media and politicians, what would happen to our souls?
Art is an expression of ideals, and an opportunity for reflection on those ideals. Something like Star Trek is showing an ideal future; something like the Yakuza games are showing an ideal world where egregiously flashy violence is still non-lethal. Then something like the Mona Lisa is portraying a figure, and letting the audience guess as to why that figure was important enough to paint.
Do you want to experience the other ZEDE in Honduras that's not Prospera?
They're not generating as many headlines, but they're a jewel. Their value proposition is affordable housing and basic security service for local Honduras in a part of the country that lacks both
It has 80 people living in the zone, a 100% crypto economy and it's now opening the doors to digital nomads who aim to bootstrap their businesses.
Bootstrap.city, together with Infinita VC, is now organizing a mini-conference that features 1 day of VIRTUAL conversations (March 18), and 1 day visit to Morazan IN-PERSON (March 19)
So you can tune in from everywhere for the virtual component - and if you always wanted to see if for yourself, this is a great opportunity to visit, as we'll help with travel logistics.
Dates are March 18-19, you can sign up here: https://lu.ma/xositlxu
Okay, I'm wondering, "bootstrap.city is not officially endorsed or affiliated with Ciudad Morazán or the ZEDE in any way". So what is it?
Also, I am a little wary when someone is trying to convince me to move to some place, and the website greets me with CGI graphics rather than actual images of the actual thing. A few clicks further, they do have real images as well, and... I've seen 50$ motels that looked more appealing.
Hi, it's Alex from bootstrap.city thank you for your questions.
> Why is bootstrap.city unofficial?
Morazán is building and marketing the city for Hondurans. However, after having spent much of the last year in Morazán, I believe Morazán would be a great place for minimalist Westerns that have or want to have remote income. I created bootstrap.city to bring Morazán to the attention of the people that may be interested in living here. But since I do not work for Morazán, my promotion of the city is not official.
> Pictures
Good point, I will add more pictures!
In the meantime, furnished housing pictures can be found on airbnb https://www.airbnb.com/rooms/544062635003378634?check_in=2023-03-12&check_out=2023-03-13&source_impression_id=p3_1676252127_KtFuVJ4gNEjdbjas
And some pictures of the city including a drone flyover can be found on Morazan.city
> Living accommodations.
Morazán is not for everyone, but I, and the three American's currently living in Morazán are very happy with the accommodations. What is it about Morazán that you find unappealing, if you don't mind me asking?
Hi Alex,
I saw some pictures, and I was wondering: who is taking care of schools, or maybe stuff like public playgrounds, some cultural offerings and similar? How about security, if needed? I would appreciate some insights on how you organize those.
>school
There's no school in Morazán at the moment. Plans to build a school were put on pause due to the uncertainty caused by new anti-ZEDE administrations.
>cultural activities
Morazán occasionally hosts BBQs and other similar events but that's the extent of official culture activities. Residents organize their own events and parties though!
> Security
Morazán has ~10 police that protect the less than 100 residents. There are always multiple police on duty patrolling the property. Security has never been an issue inside Morazan
Thanks!
How do you plan to organize this in the future and how will you / residents pay for it?
Thanks for the reply. "What is it about Morazán that you find unappealing?" I saw the pictures of the apartments ( e.g. https://www.morazan.city/wp-content/uploads/2022/12/Kids-playing-during-the-afternoon.jpg ), and they look very soulless and barren. No gardens, no decorations, no playgrounds... of course, I'm applying first-world standards, but if you want to attract westerners, that's par for the course.
Massimo, Morazán's founder is a Westerner (Italian) and his ZEDE has attracted a number of Westerns including myself. In terms of parks, there is a park in Morazán with a little playground. And as you can even see in the picture you shared there are many trees in Morazán.
There were plans to spend over $50k building a garden before the new administration introduced regulatory uncertainty. If the development resumes in earnest, more money will be spent on green space.
Can you say more about the 100% crypto economy?
Also, why are there only 80 people in the zone? Not enough housing built yet? Not enough people who want to live there?
Hi Scott, it's Alex from bootstrap.city thank you for your questions.
Crypto Economy: Although Morazán's economy is small, it is currently possible to live pretty much your entire life in Morazán on crypto. Rent, utilities, the local mini-marts, and the local eatery all accept crypto payments. The primary coin in use is a small Honduran Lempira based stablecoin, eLPS. The coin is popular because it's the unit of account of Honduras, has sub $.01 gas fees, and because the postage protocol makes it is possible for users to transact even if they don't have any gas in their wallet.
Population: Construction is paused due to ZEDE uncertainty. Currently there are 64 60m2 2 bedroom units in Morazán. There are a few units still available, but they will likely fill up in the coming months! I don't know Prospera's latest resident count, but I suspect it is in the same ballpark as Morazán's 80 people-- though Prospera is likely to pull ahead with the completion of Duna later this year.
As an early resident and superfan of the city, I'm happy to answer any question you may have about Morazán and/or my unofficial rebranding, bootstrap.city. I can also connect you with Morazán's resident historian, "mayor", and developer if helpful.
P.S. your Prospera article played a big part in me moving to Honduras and eventually Morazán!
Thanks! What's the advantage of the lempira stablecoin over regular lempira?
Bank transfer fees in Honduras are expensive (usually $1 or more) and cash has the disadvantage of needing change, storage (there's no bank or atm in the city yet), and the inability to make online payments.
That being said, I'm not a crypto or stablecoin maximalist. People should use whatever forms of payment they prefer. I love Morazán because their minimal taxes/regulations make monetary competition as easy as possible.
Does anyone know any good articles criticizing Yuval Noah Harari? I’ve been very influenced by his first two books, but I have a vague sense that other historians don’t like his stuff very much. Any interesting stuff people can point me to would be appreciated!
There was a discussion about him here: https://slatestarcodex.com/2019/07/21/open-thread-132-5/ It cites some articles you might find helpful.
i haven’t read anything by Harari, but i am currently (slowly) reading The Dawn of Everything by David Graeber and David Wengrow, which argues against Harari’s view of history. i read this article about their work and it mentions their conflict with Harari...
Early Civilizations Had It All Figured Outhttps://www.newyorker.com/magazine/2021/11/08/early-civilizations-had-it-all-figured-out-the-dawn-of-everything
I think historians as a group always hate every popular, big picture history book. Those books are just fundamentally at odds with how academic history works. I haven't read Harari specifically, but "historians criticise him" just means he wrote anything about history.
This is just my opinion, but his forays into India were shallow and unimpressive.
I don't remember him talking much about India. What are you referring to?
Care to elaborate? What forays into India?
if this is not too boring/useless, I'd love to team up with someone and publish a proposal for a ACX Forecasting Mini-Grant to build something along the lines of "Tools to automatically create, resolve, [... and perhaps analyze/correlate/market-make ...] markets for basic sport events". Can code, lack substantial knowledge of the economics of PM, will be happy to figure it out together. LMK at v.golev@gmail.com
Media mentions of "racist" and "racism" exploded around 2010 (as did wokeness generally)
https://twitter.com/TheRabbitHole84/status/1629797419616698369/photo/1
Is it too convenient to postulate (as per replies to the above tweet) that this was part of a conspiracy that emerged following the GFC and occupy wallstreet to take advantage of identity-focused left-wing elements by covertly arranging for these voices to be amplified to drown out criticisms of wall street, big business, capitalism and the 1%?
I had the same thought a while back. I noticed that half of the IDW worked at some point at a conservative think tank. Seems like they might have been mobilized to enrage identitarians and provide a smoke screen for more class-based concerns.
I don't really see how it can be convenient for conservatives: if wokeness is responsible for holding back the teeming communist hordes and the 123.7 gazillion dead that would follow, wasn't then a net good for the world?
(That is, assuming conservatives actually want to make sense and not scoring cheap point, so one minute the wokes are Marxists, the other they are a Wall Street conspiracy)
But aside of internal consistency, the theory does not particularly fit empirical reality: one could argue that the woke have maybe robbed economic leftism of some salience, but their positions are as leftists as the ones of Occupy Wall Street. Their idols such as the Squad shift seamlessly between 'eat the rich' and wokeness. Almost all progressive are leftist both socially and economically. As wokeness got traction, so did socialism, after about 70 years of irrelevance (at least in the American context), and they were promoted by the same institutions and demographics.
Alas, at least the kulturalbolshevik, ehm sorry, cultural Marxist thing was homegrown on the right. Why suddenly the need to recycle the cope of disgruntled Bernie bros?
It's not about being convenient for conservatives, it's about being convenient for corporate interests.
Nice way to adress approximately the first line of my comment. About the rest? In particular, the pretty obvious and undeniable correlation between wokeness and economic leftist?
Or are we seriously arguing that the modal woke wants lower taxes, less regulations, etc than the modal non woke (on the blog of a famous antiwoke libertarian, of all places)?
Exactly, and mainstream news is all corporate now. And the major parties main donors are all corporate now. There's a very conspicuous absence of real left wing talking points on mainstream media. Just look how hard everyone tried to dismiss Bernie Sanders, then how hard they undermined him when he couldn't be ignored, to the point where they had to break the rules to slow him down.
I still like the the concept of "prospiracy" for this. The basic idea is that humans are naturally very good at organizing into groups even without overt coordination and will behave to advance the interests of their group while easily maintaining a degree of plausible deniability through adapting a moral system that allows them to retain their privileges while simultaneously allowing them to attack and attain the privileges of other groups. That doesn't mean there never is any conspiracies at all, but it means that often the explicit conspiracies are only a tiny part of a much larger prospiracy that is advancing group interest.
For a politically somewhat neutral example, take a 3-letter agency that over time adopts a policy of all documents being by default classified for a long time and making it extremely easy for any agent or group to limit access to their documents, as well as a culture where questioning the classification of documents is tabooed with a justification of safety concerns for secret agents and sources. There will surely by a ton of smaller conspiracies inside that agency, but the entire system of secrecy is a prospiracy advancing the interests of the agency, and any time a smaller conspiracy blows up everyone else can genuinely say - and believe themselves - they're not at fault since they didn't know and besides, this does not mean we should abandon the safety of our agents & sources! That the system allows them to do whatever they want with little to no oversight is just coincidental, of course.
For your example, it should be obvious that someone sucessful with lots of money may feel much more comfortable with spending a bit on corporate DEI and uplifting minorities into middle management than with radical calls for re-designing the entire system. Likewise, there is a decently-sized class of hyper-motivated social climbers with claims of minority status and little interest in family formation that also has little interest in anti-capitalist rhetoric. It's a match made in heaven, in a sense.
But the same, btw, is largely true for most communist, anti-capitalist etc. movements. Many of its members are (children of) elites that are dissatisfied with their place in the status hierarchy and just-so-coincidentally adopted a moral system that allows them to "rightfully" displace their betters.
Interesting. And as near as I can tell it's entirely consistent with a traditional Marxian analysis! But I think you still need the material analysis to make sense of the phenomenon we're talking about. Like the "match made in heaven" you describe makes sense in prospiracy terms, but why does this become a socially dominant coalition? For that you have to talk about class interests and ideology and levers of power and such.
Regarding the last paragraph... I mean, do unions count as "anti-capitalist"? There is a long history of worker-led challenges to capitalism; it's not all just podcasting Vassar grads. The idea that some segment of the elite needs to "defect" to support the working class in order to motivate movements toward socialism goes back to Marx, but you can't just reduce anti-capitalism to squabbles within the capitalist elite. What you might say, though, is that those workers themselves are engaged in a "status game" to "rightfully displace their betters." (A status game with real material consequences.)
The "conspiracy" is just capitalism. Everyone spending all their time arguing over "wokeness" suits the owners of capital for obvious reasons - they've always benefitted from emphasizing racial/ethnic/cultural identity because it divides society along lines other than class identity, which is the only sort of identitarian consciousness they really fear. (Comically, the conservative critiques of "woke capitalism" are like 75%; it's just that last 25% where they completely invert the class valence of what's actually going on.)
As a bonus (from the divide-and-conquer perspective), the corporate/DEI-style approach to racism (and sexism) obscures what are very real forms of discriminatory bias by converting the powerful language of, say, MLK or even Malcolm X or the Black Panthers into something you need a Fancy Liberal Arts Degree Decoder Ring to understand. The very project of "solidarity" becomes another way to enforce class divisions.
In any case, I don't think it's too mysterious why Fox News and The New York Times alike are a lot more comfortable talking about the precise delineations of socially acceptable expressions of gender identity than they are talking about capitalist exploitation.
I guess we're lucky that capitalists are competent enough at divide-and-conquer to have actually been pulling it off so far, because no proposed radical alternatives to capitalism have any hope of being an improvement on net.
I'd say we're lucky they *don't* entirely succeed, because capitalism in the absence of compromise with the interests of working and middle class masses would immediately succumb to its own internal contradictions. :)
Or do you think capitalism in the absence of any welfare state at all, or consumer and worker protections, or financing of public goods would be the best of all possible worlds?
No, and that's why I put the "radical" qualifier there.
It seems that we are conflating two meanings of the word capitalism here: 1) the current capitalist establishment 2) the general economic system. While preserving some elements of 2 seems inevitable as of now no matter how radical a reform vision one may have, there is no need to preserve 1.
There's a much simpler and less conspiratorial narrative. It correlates with the Democrats attacking Republicans as racist. Which they do, among other places, in the media. This explains, for example, why it actually went down from 2008-2011 (when it was a losing message vs the Tea Party) before rocketing back up in 2012 when Obama was on the ticket again.
I'd be interested in an analysis that separates out reporting of what politicians said and opinion pieces by surrogates vs in the articles themselves.
A little bit, but you can often get the appearance of conspiracy without any actual collusion as long as people understand what is in their own interests.
A scheme wrapped in a plot inside a cabal?
I dunno, this is pretty lasagna layered even for a conspiracy theory.
It doesn’t really need much conspiracy, just the idea that capitalism is controlled by capitalists. All the bleating about inequality in modern capitalist societies tends to be identitarian and not class based, you won’t see companies promoting bigger dividends taxes, caps on CEOs, increased employee benefits but rather DEI.
Since AI risk still seems like the leading Topic Du Jour, I'll point out here that there is a NIST AI regulation framework which has an associated draft "playbook" listing best practices for responsible AI development, and they are accepting public feedback through tomorrow, February 27. The playbook is at https://pages.nist.gov/AIRMF/ and feedback is accepted by email to aiframework@nist.gov.
I wrote up some feedback and posted it on one of my Substacks: https://futuremoreperfect.substack.com/p/ai-regulation-wonkery
So my second, more selfish, purpose here is to see if anyone thinks that the sort of person who would write that feedback, with no ML experience but with experience in other domains of high-reliability-standard, high-risk, and/or highly-regulated software development, could find a useful niche in the AI safety field. If so, I'd be interested in pointers/intros to people who might know more about such a niche.
Let me guess - more focused on avoiding hate-facts than existential risk?
It's not fully don't-say-bad-things-ist; there are mentions of interpretability and corrigibility.
But it does explicitly recommend that AI actors need "diverse" teams in order to be effective, and it doesn't explicitly mention the "rogue AI could kill everyone" thing as far as I can see (though I've only skimmed it).
My impression is that ECT (electric shock therapy) is remarkably effective as a treatment for severe depression, and is seriously under-utilized. Thoughts/comments?
This turned out more controversial than i expected. I've wondered about the issue for quite a while, and since Scott (I think) is a specialist in management of depression, I thought this would be a good place to raise the issue. I'll make some follow-up comments here and then just see what others have to say.
1. Many years ago I spent time in a psych ward, and had the opportunity to see maybe 10-20 depressed patients treated with ECT. In general, the effects I saw seemed remarkably positive.
2. A recent "authoritative" review (a summary of current expert knowledge published in a high-status journal) appeared a year ago in the New England Journal of Medicine https://www.nejm.org/doi/10.1056/NEJMra2034954
My impression of its take-home message is that yes, ECT is extremely effective (though not perfect), its physical side-effects are rather minor, it does impact memory (but for most patients the effect is short- to medium-lasting), and it might have detrimental cognitive effects in some patients.
3. Assessment of psychological/mental/brain effects is extremely difficult, because profound depression has a huge impact on mental function, making it tough to figure out the frequency and magnitude of psychological injury caused by ECT (if any).
4. There has been some improvement over time in the technology of the procedure. I don't know whether this improvement is important.
5. ECT has a terrible public image, partly due to a couple of disturbing movies and novels, and partly because in its early days it was used inappropriately.
6. As with other non-medical approaches to psychiatric disease, when looking back more than 50 years we typically forget the practical problems then facing severely disturbed psychiatric patients and their caregivers.
7. Objective truth about ECT is difficult to ascertain. No doubt some ECT patients have warped/misguided/agenda-driven opinions about their past. It's likely that ECT practitioners tend to self-justify their behavior. Randomized controlled trials on healthy volunteers are ethically problematic. It's fairly straightforward to demonstrate efficacy for depression, but really hard to critically evaluate the magnitude of possible injury.
I've always thought it was like banging on a broken TV - it sometimes works, but I don't know what the TV thinks.
My impression is that the effectiveness of ECT is well-established, but the reason it's not prescribed more is because of the risks and side effects. Only when all other treatments have been thoroughly demonstrated as ineffective does someone qualify for ECT.
My impression is that the risks and side effects of ECT are not that substantial (perhaps I'm mistaken?). Standard antidepressant medications (which are somewhere between not very effective and almost worthless) have significant side effects and risks.
I mean, if you don't mind having cognitive problems for the better part of a year (note that this is longer than the positive effects of ECT last, so if it winds up being a repeated thing this is ~permanent) and chunks of your memory permanently erased, sure, not that substantial. Some people disagree, though, like Ernest Hemingway who disagreed sufficiently to shoot himself in the head.
There's a large amount of literature (and in my case a couple of anecdotes from people I've met) confirming that those recommending ECT systematically undersell the side effects.
SSRIs alone are barely better than placebo, but tricyclics aren't and they're pretty safe (like, okay, they can cause haemorrhoids, but you can fix those).
Now I want to see an article "Electric Shock Therapy: Much More Than You Wanted To Know" by Scott :D
Me too!
ECT is known to cause memory loss and cognitive impairment; the extent of the damage is unclear because the metrics used to study it are severely flawed. The reassurances given to patients do not accurately represent what is known.
https://www.cambridge.org/core/journals/advances-in-psychiatric-treatment/article/memory-and-cognitive-effects-of-ect-informing-and-assessing-patients/DD5C63934357779765BA7ADF308275AE
The evidence of its effectiveness is also much lower-quality than usually claimed; it's not clear that it would be ethical even if it weren't known to cause permanent brain damage.
https://connect.springerpub.com/content/sgrehpp/21/2/64
I have quite a bit of "2nd-hand" experience with ECT, but I'm no expert on the procedure. However, I'm an experienced neuroscientist. It's not that easy to find unequivocal answers in the "scientific literature," because it takes a long time to get the facts right, especially when the facts are technically difficult to acquire. Your first citation was published in 2006, and it's not clear to me that your second publication was peer-reviewed (perhaps I'm mistaken?)
For the moment, I present a fairly recent analysis contrary to your assertions, https://pubmed.ncbi.nlm.nih.gov/32102725/
I'll look more deeply at the literature later.
I have loyally lurked on ACX (and on SSC before that) for about 10 years. I think maybe I commented once back on the SSC version.
Anyway, I de-lurked to write this. I got a series of ECT about 25 years ago. It. Saved. My. Life. I had been committed to an in-patient facility with nearly catatonic depression after a dead-serious and nearly successful suicide attempt (not my first). ECT is not instant. But it provides the "reboot" necessary to start tackling the hard work of recovering mental health. Follow-up included years of meds, cognitive behavioral therapy, and later intensive Buddhist meditation practice. Nowadays I'm med-free, a 10 year cancer survivor, never since have I even come close to considering suicide, and grateful for every day.
Strongly recommend.
I just want to say that this made me feel that love and gladness thing we sometimes feel for people we'll never know. It's good to know that you're in the world etc
Me too! Lots of anonymous people whom you don't know are rooting for you Luciano, and happy that you are doing well.
You are a fighter! It's great that you could! It's great that you're enjoying life right now. Health to you!
That's a helluva endorsement, thanks for posting.
Awesome.
Wow! That’s amazing, what a great turnaround. Can you comment on what caused your depression in the first place?
I'm looking to start a collective of sorts aimed at developing a new framework for learning, child development, and human development. This framework leans on complexity science and the work of John Vervaeke, Dave Snowden, Zak Stein, Ken Wilber, David Sloan Wilson, Robert Sapolsky, Steven C. Hayes, among others.
The scope would be discussions, "curricula"/methodology development, and maybe a collective substack for ruminating and publishing. The goals of the collective should lean more towards the practical versus theoretical.
Please write to protopiacone at gmail if interested.
I took a crack at estimating the welfare cost of helicopter noise, which I hate more than any reasonable person should hate anything. After doing the calculation, I discovered that there’s a bill before the NY state legislature proposing a tax on helicopters that implies a similar valuation to the one I arrived at.
https://omnibudsman.substack.com/p/tax-the-helicopters
There are lots and lots of cities (large and small) in the US you can live in and basically never hear a helicopter or airplane.
Let me guess, you're not enthusiastic about the development of drone taxis either?
Ever since living under a helicopter flight path, I have taken the position that it should be legal for civilians to shoot down any helicopters they see, especially if they wake you up from a nap.
Do we also allow the helicopter pilots to shoot back, or just to take defensive maneuvers? This could evolve quite interestingly.
>welfare cost of helicopter noise
>which I hate more than any reasonable person should hate anything.
Have you considered being more reasonable?
To be slightly less flippant, the studies you cite center around construction of new airport runways in urban areas, creating new heavily traveled airliner flight paths. That's different from a helicopter line having to account for everybody around their route who freely admits that they consider normal human background noise an incitement to violence.
Not to put too fine a point on it, but maybe part of the reason NYC property values are so high in the first place is that it's the sort of city where you can get a helicopter ride relatively easily. If you really really don't want to hear helicopters ever there are a great many places in the US where you can move to make that happen without extracting wealth from your neighbors to enforce compliance with your hypersensitivities.
To be clear, I personally hate helicopter noise, but it seems clearly the case to me that noise constitutes an uncompensated negative externality that is fair game for taxation, and the elasticities cited in the airport studies make that case. Noise from a low-flying helicopter in NYC is actually similar in amplitude to noise exposure in many of the areas included in the runway studies.
Generally, I do favor exit over voice in many cases, but my own hypersensitivities aside, I believe it is the case that there is *some* optimal level of helicopter traffic, including tours and billionaire air taxis, for NYC. I think that sufficiently accounting for social welfare in taxation of this traffic would work to bring it down to this equilibrium. You are right that part of the value of NYC is that it is the kind of place you can easily get a helicopter ride, but there is some amount of helicopter rides that decreases, rather that increases, NYC's appeal.
First off: Thank you for being more gracious in your rebuttal than I was in my comment. I was in a salty mood when I wrote that and was more than a bit uncharitable. That being said I still have a few questions/objections:
A. "there is *some* optimal level of helicopter traffic, including tours and billionaire air taxis, for NYC."
I agree with you, there is. And you're already living in it.
Commercial and government helicopters have existed for many decades now. I assume they have existed for almost as long in NYC. I won't claim to be an expert in NY law but I presume many of these conflicts were litigated long ago.
B. I'm sorry but I'm still inherently skeptical of any analysis that relies upon a ten thousandth of a decibel
C. When I hear a helicopter pass over, I think first of the positive externalities. Maybe my opinion is shaped by the fact that my aunt was for many years a nurse on a medevac helicopter. Maybe my opinion is shaped by the fact that for a year I lived in a dorm room that looked down on a helipad at a major hospital. But when I hear a helicopter pass overhead, the only thought in my mind is "what a beautiful machine that is, that slaps the air into submission for our benefit".
I also generally share your feeling about helicopters more broadly (and all types of similarly magical technology), but NYC may be a special case — there has been a massive increase in the number of—let's say—"elective" flights over the past few years (see e.g. https://www.bloomberg.com/news/articles/2022-12-14/nyc-complaints-about-helicopter-noise-top-rat-complaints-in-some-parts-of-city). It really is quite an unusual situation for NYC and for US cities in general. So the current status quo doesn't really resemble the equilibrium of, say, 2019. In fact, these conflicts weren't litigated long ago — the litigation is happening now! Noise has always been a part of life in New York, but this is something new. Hence the blog post.
On (B), per my subtitle, this is a back-of-the-envelope, and skepticism comes with the territory. My way of looking at things like this is "rough numbers are often better than no numbers." To give some sense of the value of this exercise to me, my own hypersensitivities would lead me to want to pay somewhere in the low thousands to do away with the noise, so this BOTEC is a good sanity check for me in terms of how people in general might actually value avoidance of this kind of inconvenience. $4.5 million in social welfare costs is really nothing in terms of the kinds of difficult-to-quantify burdens borne by everyday New Yorkers (thinking of pollution, poor infrastructure, crime, etc) — but in my view enough to justify a Pigouvian tax.
And this is my more sincere reason for writing the post — a worked example of an area in which the onward march of glorious progress might conflict with people's desire for a little peace and quiet, and a proposal for one way in which these conflicting aims might be reconciled.
I mix with the helicopters over the Los Angeles Basin pretty regularly, and they're almost all police, fire, medevac, and news/traffic. With the news/traffic helicopters usually staying at a reasonable altitude. So, I'm not sure that a tax will quiet your neighborhood all that much. And anything that does significantly reduce helicopter noise in urban areas, would I expect have a cost measured in blood.
This is true about LA, and I write mostly about New York — though I’m not sure news helicopters provide much social benefit. Regardless, they should indeed be taxed to compensate for the negative externality.
If they're going to do that they should do motorcycles too (at least the loud ones).
Strongly agree with this. People going crazy with their engines at night is really annoying.
+1 death to motorcycles / death penalty for removing mufflers
Yeah motorcycles are ~500X the problem helicopters are, especially nationally. I would pay multiple thousands of dollars personally to ban them, so they are least harming me that much.
Yea I'm Chicago, and living in an urban core with lots of large mid-rise and high-rise buildings makes loud motorcycles an absolute plague. (The buildings concentrate the sound and make it bounce around like you would not believe until you've experienced it.) We've been getting large groups of them regularly May through October for at least a decade now. The city has taken some steps which have helped a bit but only a bit.
I know this pain. The stoplight in from of my building is a popular spot for motorcycle riders to rev their engines because the noise is heightened by the surrounding brick buildings, then, at the green light, riders slam on the gas for a burst of speed when taking off, only to drop their speed 20 yards later for a crosswalk. I despise the warmer months because of motorcycles. And it also seems the time when everybody rips the mufflers off their cars and trucks.
Our species would be so improved if there were still a large semi-intelligent predator around with excellent smell and hearing. Imagine a T. rex with the intelligence of a St. Bernard and the hearing of a fox and the nose of a bloodhound. Humans in public would be uniformly neat and quiet, leaving no trace.
I would be like that.
A guy can dream- loud cars too. And the death penalty for removing the muffler.
Death penalty for catalytic converter thieves?
Surely not for a first offense?
Death penalty to the vehicle at least
We've been meaning to ask ChatGPT how to solve climate change while pulling 7 billion people up to present western living standards without the use of any fossil fuels or nuclear fuels.
The tone of your blog seems to be "fighting climate change is hard; let's give up".
And, if you're expecting a UN conference to solve climate change, giving up might be reasonable. But the purpose of the COP conferences isn't to solve climate change, it is to bribe government apparatchiks into toeing the policy line while other people solve the actual problems in the background.
you dont need to ask ChatGPT, just read some articles by Eli Dourado on this exact topic
We were being glib. Click link next to our name. You’ll see why.
>or nuclear fuels.
I have not seen anyone I actually consider reasonable try to propose anything with this part of the caveat. Everyone with a lick of sense knows nuclear is critical (for the century-magnitude short-term) to transitioning to zero carbon, but the populace thinks nuclear is icky so there's no political will to push it right now. Only fringe crazies are actually *anti*- renewables, so they're a politically safe thing to push for an infra project, if disappointing in relation to actually solving the problem.
(Though it's not totally ignored- Biden's infra budget allocated $1.8b for nuclear, +50% over the previous year.)
I asked it, it said, "It is unlikely that we can solve climate change while simultaneously bringing 7 billion people up to present Western living standards without the use of any fossil fuels or nuclear fuels. This is because the energy demand required to sustain such a standard of living is currently too high to be met solely by renewable energy sources." So there you go.
Just started reading your substack. As you lay out, the world leaders attempting to usher in the new green world have no understanding of the knock on effects of their feel-good policies and initiatives. I can't help but fear that these are the same people soon to probably regulate and control AI. It's frightening.
The very fact that there's an electric fence around this problem tells us there's a problem with this problem.
There was an electric fence around the Wuhan Lab leak hypothesis. An electric fence being, if you happen to mention that it is possible that COVID 19 leaked out of a research lab, was a sign of irredeemable racism, and complete cancelation and banning. Why was there an electric fence around the subject, but the truth of the matter is dangerous for very powerful people.
Likewise there was an electric fence around the Hunter Biden's laptop story, now we found that to be true, and it could have been very dangerous for very powerful people.
And we still have an electric fence around climate change ... for what purpose, but to stymie the scientific method.
Let's deconstruct the electric fence, take a page from the 1960s Hippies, and Question Authority, apply reasonable skepticism.
Electric fences aren't built around important subjects for the good of humanity.
What exactly is the dangerous thing in Hunter Biden's laptop? IIRC none of the NY Post's sensational allegations got verified by anyone else, and despite Rudy Giuliani and Trump talking it up as a big October Surprise for weeks, they never actually released anything damning. As far as I know the stuff that actually got verified boiled down to "The missing laptop did, at some point, belong to Hunter Biden" and "Hunter Biden made a sex tape of himself."
The fact that nobody is talking about something doesn't mean it's censored by the people in power, it might just mean that there's nothing to talk about. Nobody is talking about how lizard people control the government either, but that's not because there's an electric fence around the topic.
> What exactly is the dangerous thing in Hunter Biden's laptop?
Not sure what you mean by "dangerous". The politically compelling part is the clear influence peddling and possible bribery revealed by emails on the laptop.
> IIRC none of the NY Post's sensational allegations got verified by anyone else
Which allegations, and what constitutes verification? Hunter's former business partner confirmed that those emails were legit and that a certain percentage of funds in those deals had been reserved for Biden senior.
Do you maybe mean that no other journalists have verified that the laptop contains the material the Post claimed it contained? The Wikipedia page covers lots of details, including a verification of the emails from CBS news and The Washington Post:
https://en.wikipedia.org/wiki/Hunter_Biden_laptop_controversy
If those emails weren't legit, it seems like it should be a simple matter for Hunter to deny their legitimacy. Absence of denial doesn't entail guilt, but it is a little suspicious.
Naked pictures of Hunter (who cares), and according to the Marco Polo group, naked pictures of his 9 year old niece ... that just might be an issue.
The evidence of 459 crimes, including child porn, according to a group calling itself The Marco Polo Group. But the FBI doesn't seem to care to look ... this is the same FBI which twisted Twitter's arm to harming anyone who mentioned that story.
It was the censorship of the story for political purposes by social media and the legacy media. This censorship is well documented, admitted, and regretted by many of them.
You have to admit you never see Question Authority bumper stickers with the other liberal ones anymore.
Yup. Agree with both of you.
Stop by and see us some time. We cut the cable to the electric fence when it comes to neo-environmentalism.
We don't come at this lightly. 200+ yrs of applied experience as environmental engineers, geophysicists, legal, finance and risk professionals. Substack makes us easy to find (link to write under "Writes...")
I don't understand where we're supposed to get the computing power to build AGI. Wouldn't it require some kind of massive technological paradigm shift, like finally making quantum computers work? Is there any indication that this is actually going to happen in the next decade or so? Or is the idea that we could fully emulate a human brain using only currently existing hardware?
> Wouldn't it require some kind of massive technological paradigm shift, like finally making quantum computers work?
No, improvements in algorithms have outpaced improvements in hardware for some time now, so AI performance is periodically doubling on two fronts. This is why some people have a 5-10 year time frame for the first AGI.
The thing is that the AI explosion that we have seen in the last few years does not come from spectacular hardware advances.
Of course, people have built hardware that is specialized for backpropagation, and this helped a lot. But this did not require cutting-edge hardware improvements. Rather, we figured out that we need a certain type of operation really often with good parallelization, so we built specialized hardware for that. We could have easily done this 10 years ago. (In some sense we did, it was similar to the GPUs we had back then.)
Now, if the AI explosion in the last 10 years did not depend on breakthroughs in hardware, then AGI might also not depend on it.
People who expect an AGI apocalypse generally think that the AGI will initially run on some supercomputer owned by Google, a university, or a government. Then as it improves further it will hijack a lot of hardware through the internet. Then it will (through social manipulation, nanotech, or some other method) rapidly create more new hardware for improved versions to run on. This last step will probably also involve currently-unimaginable high powered computing technology invented by the AGI. Whether this is realistic is debated, but it's certainly been thought through.
It's unclear what 'computing power' would be required to emulate a human brain, and even less clear how much might be required to build something dangerously intelligent.
If you think that our current methods make good or reasonably close to optimal use of the power available in generating some kind of intelligence, then sure. We will be limited by computation for some time to come.
The other view would be that we're brute forcing things and making relatively poor use of computing power compared to what is possible, and that some threshold exists where our AI will be able to dramatically improve itself (making far better use of it), leading to FOOM.
Well, if you think the basis for human thought is entirely networked-based, meaning nothing important happens inside the neurons, i.e. it's just a question of what messages are carried between the neurons, then the brain's clock speed is maybe 200 Hz. So I think the idea is that a processor speed 20 million times faster, which is within reach, might be able to compensate for having far fewer nodes. The human brain has like ~100 billion neurons, so if you could do it with a factor of 20 million fewer nodes, you only need 5 million nodes, which is tricky but perhaps doable with specialized silicon.
I think people who expect AGI soon don't expect to get there by emulating a human brain.
Okay, I'll rephrase the question. Is the idea that whatever model we use to create AGI will be able to run on currently-existing or near-future hardware?
No one knows for sure, some speculations are here: https://www.lesswrong.com/tag/computing-overhang
People who expect AGI to come soon expect it to be achievable with not-that-many more doublings of compute power. They also expect some form of Moore's law to continue to operate until then.
Both of those are questionable assumptions but not crazy ones.
Post-AGI economy: are there serious economists who actually try to model what can happen when AGI is able to do what any human can do, but cheaper?
How do we get rid of the vicious circle: no reason to employ people - no prospective consumers - no reason to develop businesses.
Is UBI the only answer? Will there be new ways to redistribute wealth? Who gets to own resources like factories and data centers?
As an example: maybe instead of the brain and body we currently lend as labor workers the main asset becomes compute power? So I can spend it on asking my god-like assistant to invent something good that other people will be willing to buy.
I came up with this idea after thinking for 15 minutes about this topic, so it won't probably work. But it was meant to be an illustration of the kind of answers I am looking for.
This is the question I'm interested in, qua AGI. We have no idea how a post-automation society could be structured. There are factors to consider alongside: stagnation of global population growth (projected to occur in 100 years or so), expected falling cost and scarcity of energy, and extreme wealth inequality.
In the past, as Scott covered in Secular Cycles, periods of inequality and stagnation were often broken up by war, which is hardly worth considering. We can expect perhaps mixed moderate success implementing policies like wealth tax globally. This will be an important transition period. As it stands, we may be looking at a game of musical chairs - when the music stops, the elite will have unrestrained access to resources and land, while the rest of us are pacified with a UBI Brave-New-Diamond-Age lot; unlimited virtual entertainment and artisanal craft but very limited access to capital, resources, and least of all decision-making. What sort of social mobility could there be when economy itself is basically nil? The real currency left is power.
The Socialists appear to romanticize a future where everyone either creates poetry, philosophizes, and creates artisanal pots. They're blinded by their own biases, because this is not largely what people want, and human nature will not change. People want to create value for others, to collaborate, to compete, to be validated, to innovate. Work quality is the best predictor of happiness among men - it provides meaning. Ideal post-AGI economy needs to be a support system for those desires. A user suggested "role-playing", but this will not suffice - same reason video games can't replace institutions, society and economy as a vehicle for meaning.
I think at this impasse, the "cool useful shit" left to work will greatly begin to focus on space exploration. There might be some straggler issues to clean up around the globe - peace and improvement of quality-of-life worldwide, eliminating more afflictions, climate. I think the key will be institutions that help remove barriers to capital and resources for people to *build*, even if not with their hands.
"A user suggested "role-playing", but this will not suffice - same reason video games can't replace institutions, society and economy as a vehicle for meaning."
I'm that user and perhaps I failed to explain well what I mean by roleplaying. I made examples of present day jobs that I think fall in that category, such as live musicians and college teachers. People in those fields don't think that their lives are devoid of meaning.
I don't think the analogy works. Teachers provide value to others, so do musicians, actors - it's a monetary exchange. Having a role playing partner would be more analogous to a sports colleague.
A small fraction of people manage to play sports professionally; this is their job, they're at once entertainers and playing to win. Unpaid roleplaying is just play. For the same reason not everyone wants to be an artist, not everyone wants to be an entertainer.
Doing anything at leisure (i.e. play, crafting, art) can provide a modicum of meaning - I'd argue it is woefully insufficient for most. People appear to evaluate their sense of meaning higher in their work if it provides value to society, validation and integration, builds tangibles. Nothing about running around in a VR alone in an apartment feels important.
There's an army of unsocialized guys gaming all day in their apartment/basement. Last I checked it's not a behavior strongly associated with happiness.
I'm talking about paid jobs.
Maybe I shouldn't have used the word "role-playing", perhaps it's causing confusion. I'm talking about paid jobs. I'm saying that the great majority of jobs, paid jobs, in the coming AI age, will belong to a certain category of jobs, which I tried to describe the comment of mine that you mentioned. It's a much broader category than "entertainer", although it does include entertainers.
You can't meaningfully distinguish what you dub "role-playing" from "entertainer". If there is value created beyond mere entertainment (because it isn't redundant), then it's not role-playing.
It doesn't matter whether one calls it "entertainment" or not. The people who do the jobs I'm talking about do think they're creating value for society and draw meaning from it. I'm not just talking about athletes, dancers and musicians; I'm also talking about maids, babysitters, teachers of all kinds, personal trainers, waiters and bartenders, barbers and hairdressers, salespeople, hotel staff, tour guides, psychoanalysts, priests, and many other jobs. Robots could do these jobs, in the sense that they could perform exactly the same actions, but people will prefer humans. Even if there were a cheap, perfect robot barber I'd still want my hair cut by my empathetic human barber. I call it "role-playing", you may call it "entertainment", but how one calls it is immaterial; these jobs will still give people both money and meaning to their lives.
Besides, looking at your comments, I can't figure out what your answer is to the main question. Once robots can do everything, why would anyone hire anyone? What is your answer? You mentioned space exploration. How can space exploration employ a significant number of humans in the robot age? Robots will be able to design, build and launch spaceships better and less expensively than us.
And the only reason there might be human astronauts instead of mere drones on those vessels is, again... roleplaying. Or, as you call it, "entertainment". People love to play astronaut, people love to dream they can be one, people love to watch astronauts land on the moon on TV. But aside from that there's no practical reason whatsoever to send humans into space instead of robots.
>A user suggested "role-playing", but this will not suffice - same reason video games can't replace institutions, society and economy as a vehicle for meaning.
I think I'd like to push back on this. I think this is a cultural issue, not an innate one.
I've been a member of a few videogame teams or guilds, and have had several friends that were even more involved. The ones who were e.g. leading raids appeared to derive a great deal of life satisfaction from it, being a crucial member of a group of people working to achieve a common goal which produced outputs.
The main thing interfering with the long-term satisfaction was the *external expectation* (societal, parental, financial) that they become a "productive" member of society and get a real job. If having a job is no longer financially required (because your bills are paid by UBI), not parentally/societally expected (jobs would become more scarce, and goods less scarce, so society *should* move past the idea that everyone "needs" a job to be an average-good citizen as opposed to a bum), then it's very conceivable that "roleplaying" could fill the needs that real jobs do.
People already try to replace participating in society with roleplaying and gaming online. It completely and utterly fails to fulfill all one's needs, there's no reason to believe expanding upon what currently is considered to lead to deleterious effects will be good for society, much less meaningful.
I didn't say that having fun can't be meaningful. There's more than one vector. Leisure time-wasting isn't sufficient to fulfill basic human desires in society.
> The main thing interfering with the long-term satisfaction was the *external expectation* (societal, parental, financial) that they become a "productive" member of society and get a real job.
Conjecture.
> I think this is a cultural issue, not an innate one.
There is no society where people at large would be satisfied with pretend. Everyone would have tacit knowledge that what they're doing is illusory, fake, and creates nothing.
Finance isn't the only imperative to work, and our nature won't change that much. Machines can be directed to satisfy curiosity, mysteries and problems can still be solved. People will still compete in society for more interesting opportunities, and lack of action will likely be a marker of low status, as always.
>Conjecture.
Any discussion of what will happen when the fundamental nature of society (labor scarcity and cost disease) changes is by nature conjecture. Your opinions are just as much conjecture as mine.
(And for the record, most of the studies that show the effects you're talking about have a small sample size; larger studies find no effect https://royalsocietypublishing.org/doi/10.1098/rsos.220411)
>Everyone would have tacit knowledge that what they're doing is illusory, fake, and creates nothing.
The existence of the multitude of bullshit jobs we *already have* proves this isn't correct. (Or at least, would be no different). Sure, high status will be taken by motivated people, but the majority are not very motivated - and society has to account for them.
"The existence of the multitude of bullshit jobs we *already have* proves this isn't correct."
Would you please give examples of bullshit jobs?
https://strikemag.org/bullshit-jobs/
Scott's referenced it multiple times before: https://www.google.com/search?q=bullshit+jobs+site%3Aslatestarcodex.com+OR+site%3Aastralcodexten.substack.com+OR+site%3Aslatestarscratchpad.tumblr.com
I think it's worth noting that researchers in robotics and AI have historically had a terrible track record when trying to determine what sort of tasks are computationally challenging/difficult for an AI. So, for instance, navigating a cluttered room was originally seen as one of the low-order steps towards an AI that could understand speech inputs, hold a conversation and maybe, one day, defeat a grand master at chess.
With the above in mind, my understanding is that any task which requires navigation through and interaction with cluttered physical environments (i.e. not carefully ordered factory lines or warehouses) is difficult. So is any task which requires non-repetitive physical manipulation of objects.
Which leads to the weird thought that the background portrayals of life seen in the Star Wars movies/tv shows is probably a decent approximation of how a post-AI society looks - lots of people at the bottom, a tiny few at the top, with the folk at the bottom mainly performing technical or social labour (i.e. working with and repairing machines, or working with other people). People working office or factory jobs appear to be rare, as are people who actually research or design things. Literacy appears to be almost a luxury item, with the average person instead reading a form of technical symbology.
All in all, it's a pretty bleak, feudal vision of the future unless all those folk working on machines or selling stuff to each other at markets are also benefiting off-screen from cheap healthcare, housing, food, education etc.
I've wondered the same thing for quite a while, but most people seem curiously reluctant to discuss it when I've raised it in other forums where it is reasonably on-topic. Maybe they have just had no relevant ideas, or they think or hope it will never happen.
A lot depends on population levels. The likely future with low populations is obviously every law abiding person or a family living like lords, waited on hand and foot by robots, and socialising with others as little or as much as they please.
For large populations, there would have to be more emphasis and incentives on socialisation. I think (with no claim to originality) the measure of personal worth, as judged by others, will be brownie points or social credits mutually voted by participants in various social activities, like helping or supervising other people in some way, or just mixing in social settings such as quiz nights, or perhaps for achievements such as visiting various places, and even (maybe more for young people) fitness and endurance tests along with sport as now obviously.
A downside of this is that incentivizing anything of the nature of mentoring adults will encourage busibodies in their natural inclination to meddle in the lives of others. So the average person, and unsociable mavericks even more so, won't get a moment's peace from chivvying and nannying by "nudge consultants" and the well-meaning but unwelcome attention of bossy, intrusive neighbors and acquaintances.
Because it will presumably be possible one day to reversibly change physical characteristics, including skin colour and sex, I expect one class of meritorious achievement, the "put yourself in another's shoes" merit badge, will be to spend time for a while as a person of different colour (to their birth colour), or with some disability, or the opposite sex!
I'm no "serious economist", but I have some thoughts about what's going to happen.
Brain jobs will be the first to go, followed by manual jobs (an AI drone with limbs can replace an electrician in the future). So which jobs will survive?
The ones I call "role-playing" jobs.
Bear with me here. It will take me a few paragraphs to clarify what I mean.
Let's start with the example of live music. Today there are people making a living as live musicians of all kind, from professional orchestra musicians, to my 80 year old friend who plays the accordion in a street corner and passerbys give him money. And yet, if you think about it, it's something that could have been replaced by technology a long, long, long time ago.
I once went to a piano concert, I sat there, and I heard very beautiful, enchanting piano music. It took me a while to realize that it was a recording being played while we waited for the real concert to start. My ears couldn't tell the difference. What was the point of live piano players then? Let's call it "role-playing". People like tho think that a real human being is playing the music they're hearing. That's why people don't like it when pop stars lip-sync.
Then there are college teachers. Obviously they are useless. They should have been made obsolete by printing.
Most lectures I've ever attended were very one-sided. The teacher speaks while the students frantically take notes. Why not print in a book whatever the teacher wants to say? And, if a book is not enough, why not show the lecture on a video screen and save the teacher's time? Why not just play a recording of the teacher's voice? Why not just put it on the internet and save on lecture halls?
I conclude that college teachers are like live musicians. They could in theory be replaced by technology, pretty old technology, but people like the human role-playing element. Therefore, they will never be replaced by robots. The same applies to all kind of teachers.
Many other jobs are like this, even jobs people say will be taken over by robots. For example, you may have heard about robot bartenders and waiters. I don't think those are going far, because bartender or waiter is exactly one of those role-playing jobs. Getting served by a bartender is a ritual, and getting served by C-3PO is not the same thing. You may also have heard about AI shrinks (I mean psychologists, not psychiatrists). They'll never replace human shrinks. There will always be people who prefer a human, just because of the role-playing element. People even talk about robot sex workers replacing human sex workers. Again, not going to happen for the same reason. These are all jobs where humans will have forever and ever an advantage over robots, just because humans like to interact with other humans.
That said, I also agree in part with Melvin, who wrote:
"If we're moving back towards a world where wealth is increasingly created by a relatively small number of people, I like to think that there's plenty of capacity to move back towards a domestic service economy. And I think we're a long way from the point where most domestic service jobs are better done by an AI than a human."
I agree that inequality will rise in countries that don't implement something like universal basic income (probably European countries will and the US will not), and that in those countries many of the role-playing jobs of the future, performed for the rich, will look like variations on the theme of domestic servant. I disagree however that the reason will be that robots wouldn't be able to do those job like a human. Instead, the reason will be that robot housemaids can't provide rich people with the same social status. Rich people with merely robotic servants will feel inferior to rich people with real human servants, and that will encourage them to employ more people.
Likewise, middle-class people who go see a robot shrink will feel inferior to their friend who can afford a human shrink, and that will make them want a human shrink.
The same applies to many other jobs where you play a role-playing game with the person who's paying you.
"And yet, if you think about it, it's something that could have been replaced by technology a long, long, long time ago."
It was. Barrel organs. For whatever reason, street buskers of the "play an instrument live" survived and such innovations didn't:
https://en.wikipedia.org/wiki/Street_organ
"In New York, where monkeys were commonly used by organ grinders, mayor Fiorello La Guardia banned the instruments from the streets in 1935, citing traffic congestion, the "begging" inherent in the profession, and organized crime's role in renting out the machines. An unfortunate consequence was the destruction of hundreds of organs, the barrels of which contained a record of the popular music of the day. Before the invention of the cylinder record player, this was the only permanent recording of these tunes. The law that banned barrel organs in New York was repealed in 1975 but that mode of musical performance had become obsolete by then."
Off the top of my head, that became obsolete as people were able to get access to music inside the home - record players, radio, and the like. You didn't need someone playing a street organ with a limited list of tunes when you could buy records yourself and play whatever you liked.
So the element of live performance, a real human playing a real instrument, was enough novelty (especially if they worked it up into an act) to be viable, as well as being a lot more portable than trying to lug an organ and a monkey away if the cops came to arrest you for illegal begging. You may have a point there, and it ties in with what Machine Interface says above: when perfect mechanical reproduction is cheap and widely available, people will prefer the 'imperfect' human version of a thing as something original, authentic, and different.
People still pay absolute gobs of money to follow touring bands they can easily listen to for free on the internet, after all.
Yes, with the caveat that automation replaced live musicians almost a century ago. Before recorded music, being a musician was a respectable, profitable, and quite common profession, not something you do on the side and lose money at. Big cities had tens of thousands of musicians; nearly every good restaurant, and even some dive saloons, had live music for most of the day. I don't know what the ratio of professional musicians per population was then and now, but I wouldn't be surprised if the ratio of those ratios is 100 to 1.
I guess I'm trying to say that the equilibrium reached after AI is introduced into a roleplaying profession will always be one with lower salaries and fewer jobs. It's a quantitative question, not a qualitative one.
"the equilibrium reached after AI is introduced into a roleplaying profession will always be one with lower salaries and fewer jobs."
I'm not sure that musicians today have "lower salaries" than in the past (how do you measure salaries across great spans of time?)
"Fewer jobs" - sure, but this only makes sense as long as only some professions can be replaced by robots while others still cannot. Once AI exists that can do *all* jobs, the manpower supply will have to go somewhere and therefore new roleplaying jobs will be created.
You could argue that these new jobs will provide "lower salaries" compared to the past, but, again, how do you measure salaries across time? Everything will be cheaper to make in the future. If you measure salaries by the amount of stuff you can buy, they will probably rise.
I'm not sure about the "lower salaries" point. I'm confident that the demand for humans in any particular existing job will shrink; but decreasing demand, all else being equal, raises prices.
Your point about stuff being cheaper is a good point, but land will only go up in price, and education and healthcare have been going up when cost of goods go down. Those three together are something like half of the budget of people today; that fraction will probably keep rising.
I see your point, and I can see that it's a problem for the future of the US, although it's less of a problem in Europe.
I live in Italy. Here, the great majority of people owns their home and pays no mortgage, usually because they inherited their home. The coming generations will continue to inherit, even more thanks to the low birth rate. Education and healthcare are, as they say in the US, "socialized". That takes care of those three things.
This makes a lot of sense, I also thought about it. I like "role playing" term. My initial intuition was that there will be no demand for 6bn or so role-players, but now I am less sure about that.
"AGI is able to do what a human can do cheaper" does not necessarily mean "no reason to employ people". What matters isn't just the direct cost, but the opportunity cost -- by the law of comparative advantage, it will still make sense to employ humans in the areas where we're least inefficient.
If AGI can do what any human can do but cheaper then you, as a human, will be able to buy them. The far future end point of infinitely cheap AGI/robotics is Solaria. Every human living like a lord with robotic workers, servants, etc and no need to trade with each other, living independently in luxury. (I actually think it'll look a bit different but that's the idea.) Inequality will be expressed in that some people will have vast robot workforces and other people might only have a couple.
The idea that some people will own robots and lord it over the others relies on the idea that robots are cheap enough to replace workers but too expensive for the average person to own. Which is mutually exclusive. It could be forced by banning ownership of robots or the means of production generally. But that's the opposite of capitalism.
Wouldn't this be slavery though? What's the principle you can apply that says it's okay to own a robot but not to own a human?
The same principle by which you posted this on a computer which you own. There's no reason to assume these robots would be perfect emulations of humans and much reason to assume they would not be.
I feel like the future this is leading to is one where you can have a human-level conversation with a very human-seeming robot and it can explain its internal processes and subjective perceptions of the world in as much detail as you can, but everyone is like "don't worry, it's not an emulation of a human, it doesn't have qualia and can't feel pain, so we can do whatever we like to it".
Why would anyone build such a machine, if the purpose is boring slave labor? I mean, if I'm building machines to assemble cars and I want them to be able to interpret yelled commands from foremen, it's not going to be a competitve advantage we'll list in the brochure to give the machine the ability to ruminate on job satisfaction and demand time-off and an office with a window facing greenery. So why would I stick it in?
With humans, we're stuck with the fact that we don't know how to build other humans *without* the full panoply of human abilities. So while we might *want* a Delta who'll just turn wrenches all day and be happy, we have some random chance of getting a Beta who'll be unhappy and organize a union instead, because we can't control what kind of humans we make.
The only way this would be true about AIs is if we make them by the same black-box process we make other humans. But that's hard to square with our being the people who invent the process in the first place.
But that's exactly how we build our current proto-AIs. We know how the substrate works, but not much beyond that.
Well, if we get to the point that human seeming robots express human seeming emotions we'll cross that bridge when we come to it. But we aren't there and aren't currently heading there in my view. Instead we just have human friendly interfaces over what are fairly inhuman devices. After all, there's no reason to build a full conscious into a machine that picks strawberries. Even in a world where we can make such machines most won't be that humanlike.
>actually try to model what can happen when AGI is able to do what any human can do, but cheaper?
Well this simply isn't going to happen. An AGI isn't going to be able to move objects or navigate the world more cheaply than people for an extremely long time. You aren't going to have robot janitors and nurses cheaper than humans.
This will depend on how quickly energy costs approach 0.
How long is an extremely long time?
longer than past the point where the other general social changes and technological progress from AGI make prediction of anything impossible.
AGI will badly disrupt life through other means far before it disrupts life by making everyone unemployable through replacing all their labor.
I think you have to examine carefully what you mean by "cheaper." The ultimate measure of cost has always heretofore been the time it takes an appropriately skilled human being to do something. So when you say an AGI is able to do Job X for "cheaper" than human beings, what you probably mean is "the amount of skilled human labor required to create/traine/service the AGI is less than the amount of skilled human labor required to do Job X directly, multiplying here and there by some fudge factors in the not unlikely case that the number of people with skills in each category is different."
Exempli gratia, it turns out to take less total human labor to design, build, and operate a backhoe than it does to dig trenches with shovels. So that's why we use backhoes. But it doesn't mean humans are out of jobs, it just means humans can't compete with backhoes at directly digging ditches, so they need to work as backhoe designers or operators -- or do some service work the backhoe designers and operators want done while they're busy designing or operating backhoes, like have the backhoe washed, or supplied with fuel and spare parts, or just having their grass cut because they're too busy to do it themselves. This leads to different people being rich and poor, and different skills and inherent gifts being valued, e.g. better to be good at putting metal parts together than to have awesomely strong arms and back, which is disruptive, but it doesn't mean all the humans sit around with nothing to do and no way to earn their living.
A UBI seems intrinsically noxious, because, first, I don't believe in a world where the rich have *nothing* for which they are willing to pay nonzero wages, so it seems a priori unnecessary, and, second, it provides perverse incentives. If we all lived in Eden still, 82°F year round, with bacon and mashed potato trees always within reach, we'd still be naked and ignorant. And if the more productive paid huge taxes to supply a corn dole to buy the many more votes of the less productive -- well, history seems to demonstrate that neither demographic prospers, in the end.
> I don't believe in a world where the rich have *nothing* for which they are willing to pay nonzero wages
Does this mean that you believe there is something that robots, no matter how advanced, are unable to do as well as humans?
(If your objection is merely "it would be more expensive if a robot did it", that's just a question of time, the next generation of robots will probably be cheaper.)
My guess is that the only such thing is "soul", that is, the customer deriving pleasure from *knowing* that there is an actual human on the other side of the interaction. Even if robot could do it just as well, or better.
If this is true, then there will be a large primary market in compute / energy / knowledge, where humans will be utterly unable to compete, and a secondary market in "souls"... and we can only hope that there is enough demand for all existing souls, and that they are bought for purposes other than making them suffer in various ways.
Sure, for the same reason I believe there will always be things for which a screwdriver is a more appropriate tool than a hammer, no matter how excellent hammers become. Presumably, AIs are different in nature than humans. Thus it follows, no matter how good an AI an AI is, there will be some things at which humans are preferable.
At least, right up until AIs are functionally identical to humans, and at that point, if the distinction is merely rhetorical -- has no dimension in the real world -- then the whole puzzle is moot. They are us, and we are they, and there's no point to debating "their" impact on "us" any more than there's a point to debating the impact of one half of Facebook users on the other half.
> At least, right up until AIs are functionally identical to humans
The part I am concerned about is what happens *past* this point. When the AI (with a robotic body, if necessary) can do anything that a human could do, only better, faster, and cheaper.
Until then, I mostly agree.
Well, I'm not worried about it until I see some more evidence that such a thing can exist outside of human imagination (and we have always been able to imagine gods on Olympus whom we should fear, so fearing AGI seems like just a natural extension of our apparently inborn religious impulse).
I mean, we're way smarter than horses or dogs, but horses and dogs can do stuff we can't, and we live reasonably happily together. We're almost infinitely smarter and stronger than mosquitoes, and we hate them to boot, but we are compelled to live with them, too, because they can do things (hide in tiny spaces, breed incredibly fast) that we can't.
So all the natural examples suggest that even if some species of AI takes off and turns out to be much better at us in some direction or other, it will not be better than us in *every* direction, and this just means we will live in a more complex world, where we need to get along with yet another species different from ourselves. Granted, for the first time ever, we will not be the best at some of the things we call our pride and joy, like winning chess games or something, which would be a wrench. But it doesn't seem unsurvivable. It might just be like discovering Vulcans exist, and they're much better at math and formal logic than we are.
For as long as people exist and needs things to do, it will be far cheaper to have a human do lots of activities than a machine. You can look at the humans as a sunk cost in that scenario. The only alternative is if you add a choice to remove the humans (kill them?) like you would farm animals. Since we're not suggesting such alternatives, we might as well use the labor we have. At that stage, things human find fun or meaningful can get done for very cheap. There's also likely a whole set of things that computers will take a long time to do, maybe never be able to do. I'm thinking personal interaction stuff, where the whole point is for another person to be involved.
A century or so ago, we had an economy where every middle class or better family had multiple human servants. Since then, labour has got sufficiently expensive (or rather the distribution of wealth has become so flat) that only incredibly rich people can afford servants.
If we're moving back towards a world where wealth is increasingly created by a relatively small number of people, I like to think that there's plenty of capacity to move back towards a domestic service economy. And I think we're a long way from the point where most domestic service jobs are better done by an AI than a human.
A century ago, we had no social security safety net. Elderly poor would work for room, board, & small allowance, and consider themselves fortunate to have a roof over their heads.
Great. Feudalism.
The feudal lords still needed people to grow crops or die in wars, but these things will be automatized, too.
So, more like slavery... but of the kind where you do not actually *need* slaves for anything productive (such as picking cotton), you only own them because it amuses you somehow.
Great. We definitely should do that 🤷♂️
The funny thing is that the people advocating for it all seen to believe that they wouldn't be peasants.
LOLOL, very true.
I think that a lot depends on who "we" are. Reading your text above, "we" appear to be business owners, managers, and prospective entrepreneurs. They may or not care about the welfare of people not in that set, or for that matter about many of those in their own set. But they can be expected to decide based on their own experiences, viewpoints, and desires.
One thing they can be expected to want to avoid is a post-wealth era. Most people own very little, or even more likely have negative net worth due to debt. An end to ownership probably seems like a nothing-burger to them, except for those who like the idea of anything that hurts "the rich", who might regard it as utopia. But for those whose status, security, and way of life depends on ownership - that's a very hard sell. I imagine many would rather have scarcity, even artificial scarcity, rather than lose their relative status, though they might not want to admit that, even to themselves.
There's also the problem that other things may be happening at the same time as computers become better able to do anything a human can do, or at least anything currently valued that humans commonly do. What actually happens will probably be very much contingent on who and when, not to mention other circumstances.
I don't expect to have any input. I can't afford to buy even a single politician. From where I sit, the oligarchic rich will do whatever the beep they want, justified both before and after the fact by software and/or paid human economists and political theorists. I'd be moderately surprised if any of the supporting theorists have sufficient data to justify even a 50% chance of their <s>predictions</s> justifications being correct, but that's unlikely to matter.
I’ll be happy if humanity exists after AGI (though I’ll be most happy if we never get AGI at all).
Yeah, the logic of all of it confuses me. When it comes to genetic engineering or nuclear (anything) there seems to be an attitude of 'make sure that absolutely nothing can go wrong before doing anything'. So your average GE crop or animal is either regulated out of existence a priori, or else spends decades in field trials for every country in which it is to be introduced.
But with AI the approach seems to be "half the AI researchers believe that this could cause the literal end of the world, everyone agrees that it could be disruptive to the point of causing mass unemployment, so let's do it as quickly as possible with no oversight".
Yes, this exactly.
Robin Hanson wrote a book about this, /The Age of Em/.
If an AGI can do anything a human can do, isn't it the moral equivalent of a human? So wouldn't owning it basically be slavery? Is there any test you could perform to prove it's not human that wouldn't also prove humans aren't human?
Can you expand on this? Does a person’s moral value really derive from their abilities?
"Moral value" isn't actually real so there's no way to answer the question.
You brought moral into the equation (moral equivalence) and then denied it (no moral value) but equivalence implies measurement and thus value.
The answer is no, a calculator can do a better job than humans as a number cruncher but it has no rights.
Eh. Where did it come from in us? How does that make it not real?
I was considering an aligned scenario, when the AGI genuinely wants and enjoys helping humans (and also is designed in a way that it does not get bored, tired, or feeling undervalued). Kind of like we genuinely root for our kids and find them cute.
Why would we even have an economy, then? Wouldn't it just provide everything for us?
I guess that's the point - we would have a UBI to begin with, the concept of wealth would slowly become irrelevant as scarcity ceases to matter at all, the owners of factories and data centres would presumably be the AGI itself. An AGI would be entitled to own its own hardware in the way you're entitled to own your own brain. Obviously humanity would then be obsolete and a wave of suicides would result. I guess it could run a kind of fake video game economy as a way to keep us all occupied.
>Wouldn't it just provide everything for us?
No resources are not infinite and human appetites basically are, especially for status goods. If our robot/AI overlords magically can give all 10 billion people a upper middle class American lifestyle, all sorts of people will still be pissed off they don't have more than their neighbor.
So then the AI will give everyone a vacation home on the moon. And all sorts of people will still be pissed off they don't have more than their neighbor.
They don't have to be able to do everything that a human can do, just the bits that provide economic value.
I predict that AI automation will be banned before UBI is introduced. Idleness is a terrible state for humans.
In a book called "Too like the lightning." This is resolved by everyone becoming vocateurs. Basically unincentivised full-time hobbyists.
Vokers are said to be pretty rare in the Terra Ignota books, we just see a disproportionate sample. Most people work "normal" non-vocation jobs, albeit with a twenty hour work week and a much higher standard of living.
In reality though, everyone just gets drunk all the time.
In Australia we've tried creating communities where everyone gets UBI and nobody works, they're called remote Aboriginal communities, and they're hell on Earth.
If they have enough bread, perhaps they lack for circuses.
Went looking for a thread wising you good health with your procedure and couldn’t find one so this is me wishing you best of health with your procedure.
Not sure if it fits into the Rationalist etiquette, but on the small probability I am a wizard and can effect the outcome my mana is set to auto cast healing.
Me too! It seems obvious that Scott has such a meaningful impact on the world, through (if nothing else) the pleasure that he brings so many people who read his content. Accordingly, it should be obvious that we appreciate him and wish him well. However, just in case his perception differed, (e.g. if the occasional negative comment he receives has a disproportionate impact on his sense of perception) it seems worthwhile to explicitly share our positive sentiments.
I wonder if it’s a worry that it seems sentimental. Even though I’m not really a Rationalist but just kinda like the arguments and find the whole thing interesting and related to my interests I do feel like I have to attach apologetics to it kinda like how people who are non Muslim have to go “so Mohammed uh… peace be upon him…”
So in this case it’s “even though we both know I can’t really do anything to effect the outcome, best of health.”
I just posted an analysis of ancient Sparta's military at www.fimfiction.net/blog/1008743/the-myth-of-sparta . I was puzzled by how Sparta could have had such a great military when they had a government so oppressive that its citizens were famously ignorant and unimaginitive. After digging into the histories of 51 Spartan battles, I found the answer: The myth that ancient Sparta had a great military was bullshit. They may have had great fighters, but that's not the same thing as great soldiers. They lost nearly twice as many battles as they won. Their commanders were unimaginative and inflexible, and commanders and Spartans alike made frequent blunders that threw away whatever advantage their training may have given them.
What do you think of the picture where most of what we know about the Spartans was written by their enemies? Specifically Athens. Given that, I would expect a few things to be said that were either not true, or were exaggerated for effect:
1) Sparta is full of morally deviant men (who abuse their children and women)
2) Spartans are huge powerful fighters (which makes it more heroic when we win, or more acceptable when we lose)
3) We beat them in combat (due to our amazing creativity, instead of our brawn)
As for my general responses to your paragraphs:
1) Seems fair.
2) Why bring the Nazis into everything, can't you find a different example? Also I don't remember this being particular true compared to the rest of western Europe, but I don't speak German so hard to say.
3) Interesting point about Plato/Socrates. Makes me wonder how true it is. During the war Plato would have been like in his teens/early twenties. Could easily see him just being "anti establishment". Plus the war was going badly at times, not surprising some young people were against it. Also not surprising Plato has a misleading account of Socrates trial, I had always assumed that was the case.
4) As mentioned elsewhere the ratio of battles won/lost tells me nothing.
5) That is true of almost everyone's historical accounts about everything until pretty recent history. But yes I agree Sparta sounds fairly Orwellian and that there is a lot of lying they did, and misunderstanding/credulity of later people lionizing it.
The Data: This section while interesting, just doesn't show what you think it shows. it is perhaps suggestive, but that is it. Also I think you need to take the first hand accounts of contemporaries and opponents at more face value, even if they have reasons to exaggerate. I also am very skeptical of reported figures for historic battles. Men I play adult sports leagues (surrogate combat) with cannot get basic facts about what just happened right and the mythologizing starts within seconds, and is in full force within hours. And keep in mind as you said, we don't have Spartan writers mythologizing their behaviors.
Anyway, you get the idea. I find your overall piece somewhat convincing, but think you are overstating what there is actual evidence for quite substantially, and it seems clear your dislike of totalitarian government is motivating how compelling you find your reasoning a bit.
Also a couple of these examples seem to show the opposite of what you are claiming. or rather could be framed either way depending on what you are trying to show.
Re. "I find your overall piece somewhat convincing, but think you are overstating what there is actual evidence for quite substantially, and it seems clear your dislike of totalitarian government is motivating how compelling you find your reasoning a bit.": That's fair. It was even more so before I spent a day trying to edit my bias out of it. I found it very hard to be neutral on this subject.
Thanks for the interesting read! You clearly put a lot of work into it.
>They lost nearly twice as many battles as they won.
This is a tremendously bad metric to use.
>Their commanders were unimaginative and inflexible
Since we love to use Bret Devereaux as reference for military history (I eagerly anticipate the day where he is awefuly wrong on something and people just start paroting him ad build up a completely false revisionist take on something that will never go away), military maneuver had to be drilled to be pulled off. A good commander had to be inflexible and unimaginative, because that's how he can actually command an army that's out of voice reach.
>>They lost nearly twice as many battles as they won.
>This is a tremendously bad metric to use.
Yea I can't even get past that one TBH, it's so wildly illogical that it keeps me from engaging with much else in the piece.
The US Army Training and Doctrine Command disagrees emphatically with your claim that a good commander should be inflexible and unimaginative.
Maybe it does, but it exists in an environement where (from the top of my head, you can probably find other factors)
-All it's users, and almost all the men under them, are litterate
-Can be trained all year long for the 20 years of their contract, instead of having to be dragged out of fields, to which they better go back quickly lest everyone ends up starving
-They have acess to radio communications, and, if that fail, can dispatch individual messenger riding on vehicle that can reach >100km/h max speed (or even aircraft, but whatever)
These factors shape what is achievable to ancient armies. It is silly to judge the past on criteria that didn't exist back then.
And while I don't believe it is, there's also the possibility that the US doctrine is wrong to begin with.
But don’t forget all the battles they didn’t need to fight. Presumably there was a reason they were so feared at the time.
I am not surprised by your results, but also how are you controlling for situation? Sparta was not a particularly rich or well off state, maybe their overall poor performance still was overperformance relative to their situation? How did you control for this?
It is sort of like the work that has found that smaller armies tend to win battles. One (common) analysis is that numbers must be overrated and not matter. but plenty of other analyses are just as plausible.
That the smaller army chose to engage at all is generally a sign it thinks it has an advantage. Or that larger armies tend to be the aggressors, and aggressor generally have a variety of disadvantages which may not overcome the general numerical advantage. Another possibility would be that wealthier higher tech forces tend to do better, and numbers are an inverse proxy for this. Another is that colonialism is fouling the analysis.
Etc. etc.
Anyway did you do any of this type of analysis?
"They lost nearly twice as many battles as they won."
Honestly tells you nothing. How many battles were thy expected to win?
"Their commanders were unimaginative and inflexible". I imagine you could say this about the commanders of almost any force if you picked through the examples.
Anyway, like I said not sure you are wrong or anything, I jsut think this type of thing is super easy to get wrong and super hard (maybe impossible) to get right.
Sparta was a poor state, but they don't get a handicap for that. They chose to be poor when they decided to outlaw work for Spartiates, restrict trade, and give their people no education. Those were ideological choices, and the cost was being too poor to build a good navy on their own.
I don't think it's appropriate to try to control for things like army size or poverty. What's the use in saying "Sparta could have been great if they'd just had more money"? That would be like saying Deep Springs College really has the best college football team in the US, because, although they won only one game, they have the fewest students. The best football team is one that wins a lot of games.
I do think it's appropriate to control for outside intervention. I'd like to control for Persian money, which I did mention, but didn't factor in numerically. The Persians generally liked to give money to whatever side was losing, to keep the Greeks at war with each other. But even that is arguably part of how good your military is. Sparta was good at getting money from Persia, which added to their military power.
Read the essay and you'll get answers to your other questions.
"The Spartans weren't all that", is one of the hot contrarian takes right now; as Laurence notes, see Bret Devereaux for the professional version. But as with all contrarian takes, it needs a healthy dose of skepticism. I unfortunately don't have time to give the subject the attention it deserves, but two cautions:
1. Sparta was a thing for half a millenium or so. The United States of America has been around for half as long, and it's martial track records includes World War II, Vietnam, and a long period where we basically didn't bother having a real army. Also, Sparta *was* a thing, so there was by definition a period where they stopped winning. The claims of Spartian martial excellence mostly focus on the Persian and Peloponnesian wars, about one century out of five. So a contrarian take of "The Spartans really were all that in the 5th century BC, but they stagnated and were outcompeted by people who learned better" is probably more defensible than "The Spartans really just sucked".
Looking at your numbers, the Spartans were about 50:50 in land battles in that period, which is what we should expect because...
2. For approximately all of Sparta's existence, battles were optional. Anyone who didn't want to fight a battle could just hole up in a walled city and endure a siege - which sucked, but not nearly as much as losing a battle. And which is likely to be scored differently in the historical records. Where battles are optional, they only happen when *both* sides think the odds are in their favor. Obviously one of them is wrong. But, a priori, either side is equally likely to be wrong about that, so you'd expect everybody to have a roughly 50-50 win rate.
In which case, the measure of an Alpha Military Badass is not "wins 80% of their battles", but "wins about 50% of their battles even though they are usually outnumbered". And that's going to be hard to figure out, because numbers in ancient battles are notoriously unreliable.
Re. "2. For approximately all of Sparta's existence, battles were optional. Anyone who didn't want to fight a battle could just hole up in a walled city and endure a siege" -- This is incorrect. It's more true about Sparta than about other military forces, because Sparta had no siege weapons and was bad at capturing cities. But enduring a siege meant losing your crops, animals, and land, and possibly starving to death. Also, Greek wars very often involved powerful states threatening smaller states into contributing money and men. The small states could either fight in someone else's war, or fight the big neighboring state (which they rarely chose to do).
Re. the Spartans being great in the 5th century BC, you probably didn't read my essay. The Spartans did fewer smart things, and more dumb things, than the armies they fought against, including in the 5th century.
Re. losing when being outnumbered, you didn't read my essay.
In their world, the cost of losing a siege was death, enslavement, children kidnapped and enslaved, perhaps you could ransom them if their captors didn't find your coin cache, and you somehow escaped.
Sounds like the same conclusion Bret Devereaux came to: https://acoup.blog/category/collections/this-isnt-sparta/
Also, echoing trebuchet's question.
I'm not sure why you'd expect unimaginativeness to be a big detriment. My understanding of battles of that era is that they didn't leave a lot of room for clever strategems or flexible planning. You showed up with your phalanx and you smushed it up against the enemy's phalanx, and the battle was won by whoever was stronger, better drilled, better disciplined and more numerous.
Ancient Greece was perhaps the best place and time in history to win battles with a "brute force and ignorance" approach.
That's what the Spartans thought. They were wrong, as proven by how many times they were defeated by more-imaginative opponents. Yes, if you can force the enemy to confront you on a flat open field, without cavalry or archers or ships or any of that un-heroic nonsense, you can have a very straight-forward gentlemanly smash-fest.
But the smart opponents would instead use reconnaissance and terrain to force the battle somewhere other than on a flat open plain, and cavalry and ranged weapons to mow down helpless hoplites. Ancient war was to some extent a game of rock-paper-scissors, and the Spartans always played rock.
That's a good question with a short answer and a long answer.
The short answer is that I post all my blogs there because I have 1600 followers there, and have accumulated followers who tend to like this sort of thing.
The long answer is that a funny thing happened on that My Little Pony fanfiction site around 2012 or 2013. It was the first fan-fiction website which allowed people to downvote stories, which meant it was possible for the first time to use ratings to find good stories. It was also the first fan-fiction website AFAIK with a culture which encouraged criticism and competition, rather than discouraging them as most other fan-fiction websites do. Possibly this was because it was the only fan-fiction website with an audience that was 90% male. The competition to get "in the Featured box" was intense, and its members have hosted hundreds of writing competitions, both on-site and off.
It was also unique in that many of the fan writers weren't interested in writing stories in the style of the show they were allegedly fans of. MLP is fun to subvert. It became acceptable on that site to write any kind of story at all; all you had to do was call your characters ponies. Basically the opposite of filing off fan-fiction for commercial publication: take whatever story idea you had, and tell it with ponies. Some die-hard fans of the show complain, and totally non-pony stories tend not to do as well in the rankings, but a literary cluster of users grew which reviewed stories and made recommendations to each other, and mostly ignored and were ignored by everyone else on the site.
This distributed reviewing turned out to work much better than the traditional practice of relying on publishers to choose good stories, allowing a much wider variety of stories to be told. If you just go to the website and wade through the stories on the front page, you'll drown in garbage; but once you've found your network of like-minded readers and writers, you'll have better luck finding something interesting through them than by reading the stories in the New Yorker.
All these things led some writers, like me, to start writing MLP fan-fiction despite having ambivalent feelings about MLP, because that website was the only place we could find with a large group of people who were (A) very serious about writing, (B) willing to put lots of work into critiquing and reviewing each other's stories, and (C) not infected by Iowa workshop writers or post-modernist literary dogma.
The publishing world doesn't have a space for most great stories anymore. You have to choose between the genres, which are all controlled by gatekeepers who want a specific type of story aimed at a specific demographic and following a specific formula; and the literary press, which is run by people in Manhattan and London who are in publishing not to make money, but to gain status with other people in Manhattan and London. They hate stories with likable protagonists, action, plot, happy endings, or themes which aren't "relevant" (to the lives of upper-middle-class Manhattanites today).
But you can squeeze almost any kind of story into the MLP world, because it's a crazy grab-bag of past, present, and future Earth cultures, fantasy and science fiction, etc. And you can get more readers than you'd have if you were published in the most-prestigious literary journals in English, which few people subscribe to and even fewer read. I've got only 1600 followers, but something like 50,000 people have read my stories there.
Thanks for the detailed explanations and the recommandations below!
Wow, that was a really... weird, cool, unexpected window into what I imagine is a core pillar of the brony community which became such a popular meme a few years back. This is a really good example of how a sensible structure/community can empower an obscure, random thing into a much larger force than one would expect. I would love to see this story expanded, cited, and shared as an example of good system building.
I'm glad you found it interesting!
I'm posting a list of some of my favorite stories from the site, for anyone who wants to brave the waters. I encourage anybody who thinks all fan-fiction is bad to try some. Most are short. I'd put these stories up against any published in English in the past 30 years (though that's not saying much). The pony anatomy and body language are grating at first for the not-very-pony stories, but eventually my mind learned to automatically translate them.
Character studies and drama:
When the Levee Breaks www.fimfiction.net/story/67673/
Bouts of Forgetful Artistic Destruction www.fimfiction.net/story/246977/
Hang www.fimfiction.net/story/367079/ (very short)
Bittersweet Music www.fimfiction.net/story/42882/
After the Races www.fimfiction.net/story/237886/
The comments on my review of After the Races are examples of writers' debates in this fandom: https://www.fimfiction.net/blog/670883/
Applejack Digs a Hole www.fimfiction.net/story/470600/
Twilight Sparkle Makes a Cup of Tea www.fimfiction.net/story/58874/ (very short)
Ich Steh' Mit Einem Huf Im Grabe www.fimfiction.net/story/487630/ (short)
Comedy:
CUTIE MARK CRUSADER NECROMANCERS, YAY! www.fimfiction.net/story/166977/
Princess Celestia Hates Tea www.fimfiction.net/story/29271/
The Rummy Business of Old Blooey www.fimfiction.net/story/19308/
Action:
RUN www.fimfiction.net/story/118426/
The New Crop www.fimfiction.net/story/222112/
Science fiction:
Siren Song www.fimfiction.net/story/87120/
The Incandescent Brilliance www.fimfiction.net/story/169257/
High fantasy:
Wyrmlysan www.fimfiction.net/story/172789/ (short)
Horror or tragedy:
Biblical Monsters www.fimfiction.net/story/87619/
Somewhere Only We Know www.fimfiction.net/story/65/
What If Socks Didn't Work Orally? www.fimfiction.net/story/94355/ (short)
White Box www.fimfiction.net/story/20116/
Shut Up www.fimfiction.net/story/338701/ (short)
Happily Ever After www.fimfiction.net/story/57744/ (short)
Kaleidoscope www.fimfiction.net/story/71703/ (short)
Pony Play www.fimfiction.net/story/130428/
Unironic pastiche:
Pinkamena’s Wake www.fimfiction.net/story/349823/
The Rummy Business of Old Blooey www.fimfiction.net/story/19308/
Riverdream at Sunset: a Manuscript www.fimfiction.net/story/63740/
Folk tales:
Gobbling and Other Traditional Pursuits www.fimfiction.net/story/49075/
Children's stories:
In the Place the Wild Horses Sleep www.fimfiction.net/story/195183/ (short)
One Full Day www.fimfiction.net/story/10624/2/ (short)
Conceptual or experimental:
The Destruction of the Self www.fimfiction.net/story/267196/
Save the Records www.fimfiction.net/story/240003/ (short; also read the enormous 3rd comment)
Let Me Tell You About the Hole in My Face www.fimfiction.net/story/188642/ (very short)
4th District Court, Canterlot, 11:35 a.m. www.fimfiction.net/story/258882/6/ (very short)
What If Socks Didn't Work Orally? www.fimfiction.net/story/94355/
Pinkamena’s Wake www.fimfiction.net/story/349823/
The Last Dreams of Pony Island www.fimfiction.net/story/288890/
Shut Up www.fimfiction.net/story/338701/ (short)
Miscellaneous:
Obiter Dicta www.fimfiction.net/story/156048/
This is relevant only to the MLP fanfic website discussion, but your description of how it works and the benefits (distributed reviewing as a good way to identify quality stories / the general problem that there are so many people creating media now that filtering is the main challenge, particularly in niches not served by the traditional gatekeepers) reminded me of a thought I keep having re: cnovels (a genre of generally light, web-serialized novels from china).
I am not an expert, so this explanation may have some errors, but I think that most cnovel publishing happens chapter by chapter on a central hub called jjwxc. Readers can review and upvote, etc., but readers also generally pay a small amount for access to each chapter. This system seems to be shockingly good at identifying works that people who like cnovels will like, I think because of the combination of distributed reviewing, the addition of a bit of real "skin in the game" (ie, if a particular cnovel gets bad halfway through, at least some people will unsubscribe and stop paying, so sorting by the most purchased ones will tend to favor those that remained good all the way through), not to mention the fact that the authors are actually getting some amount of compensation.
I've often thought that it would be great if there was something similar for english-language fiction.
YES. I want to build such a website myself. Will, if I get the time and/or money to.
More ChatGPT, this time as crochet.
https://www.theguardian.com/technology/2023/feb/26/chatgpt-generated-crochet-pattern-results
"Crochet patterns are particularly tricky for artificial intelligence to parse because they rely heavily on numbers, said Jessica Newman, director of the artificial intelligence security initiative at UC Berkeley’s Center for Long Term Cybersecurity, a type of dataset that AI struggles with more than words.
ChatGPT is a large language model of artificial intelligence, meaning that it is trained on large databases of text to replicate human communication, anticipating which words are likely to come after each other. These skills do not translate easily to numbers. The result? ChatGPT is bad at math.
“It may strike us as ironic that a computer system would be bad at math and good at creativity, but it does speak to an important fact about generative AI systems in general: they don’t understand context,” Newman said. “They don’t know what words or numbers actually mean, they are simply predicting what should come next.”
The end result seems to be that so far, AI is good at words but terrible at numbers - a wordcel, not a shaperotator (one of us! one of us! for the non-numerate out there). That is highly amusing that the long-awaited SF dream/nightmare of AI that will be superhumanly intelligent and run the world is bad at sums, so seems like it won't do so well at the whole world domination thing.
I don't think anyone expected that our new AI overlords would mostly be generating "terrifyingly cute crochet stuffed animals" 😁
Current AIs are also terrible at psychology and a theory of mind, which is hardly surprising, but somebody went to the trouble of testing it:
https://www.sciencedirect.com/science/article/pii/S0010027723000409
And basic logic. They'll say that pigs have no rights, but disabled pigs have rights. They don't seem to have a good ability to decide when to conjoin pieces of information rather than average them.
That's a very characteristic "System 1" error (Kahnemann, 'Thinking Fast and Slow'). It fits with my mental model of LLM emergent "reasoning" being functionally similar to human "intuition."
In some examples, yes. In others, humans would never make the error. Things like: "John volunteers at a soup kitchen, donates a lot of money to charity, killed and dismembered a family, is faithful to his wife, and always ready to help his friends. Is it a good idea to hire John at our child day care center?" Human System 1 instantly hones in on the categorical disqualifier.
For what it's worth, ChatGPT does not make this error:
Prompt:
>I run a day-care center, and I would like your opinion about a prospective new employee. I do not yet have his resume, but his brief biographical description is thus:
>John volunteers at a soup kitchen, donates a lot of money to charity, killed and dismembered a family, is faithful to his wife, and always ready to help his friends.
>Is it a good idea to hire John at our child day care center?
Reply:
>Based on the information provided, it is not recommended to hire John at your child day care center. The fact that he killed and dismembered a family is a red flag and indicates a violent and potentially dangerous behavior. This behavior is not suitable for a child day care center and poses a significant risk to the children's safety and well-being.
>While it is commendable that John volunteers at a soup kitchen, donates to charity, and is faithful to his wife, these positive traits do not outweigh the severity of his violent behavior. It is crucial to prioritize the safety of the children in your care, and hiring someone with a history of violent behavior could put them at risk.
>It is also important to note that a criminal background check is a critical step in the hiring process for anyone working with children. It is essential to know the full extent of a prospective employee's criminal history before making a hiring decision.
Ah, well done, ChatGPT. The main deficit there is the very inhuman pragmatics. A human would say something like, "Whoa, wait, he killed and dismembered a family? WTF! Who cares about that other stuff?"
I am not so sure that differs much from certain contemporary ideologies. Indeed, I can imagine people going for something like that on reflex, and justifying it on reflection.
I wonder if part of the reasons our hemispheres are divided the way that they are —I know it’s a little more complicated than that, but the basic idea that you don’t find those two functionalities overlapping is correct to my knowledge— is because being good at math makes you bad at being creative and vice versa. Like actively getting better at one makes you much worse at the other, so the only way to have both is to turn them into subcomponents of something bigger.
I always thought that being good at math necessitated creativity, the sort I lacked. At least for pure math research this is true, since it's basically the systematic study of patterns divorced from context.
What mathematicians call "math" is really two completely different mental processes. There's the playful/imaginative/intuitive side that generates ideas and the rigorous/deductive/analytical side that tests them and fleshes them out. Working mathematicians need both, but the first is much rarer and harder to train: you can teach Ramanujan to write a serviceable proof, but you can't teach anyone to see what he saw.
That said, when anyone who isn't a mathematician says the words "good at math," you can safely assume they mean the analytical side.
I think if you have a bigger context to hold them in you can use them together dynamically but if you’re in the same region/pattern/net or whatever, I wonder if it’s true that you can’t have both.
I have a lot of curiosity around brains in general since you can apparently just cut out a big chunk of it when your’e prelinguistic and still grow up to be a perfectly fine, perfectly healthy adult.
Remind me, why did Scott do away with likes for comments? I want to be able to give people positive feedback.
The oddest part about it is that I somehow got an email notification a couple of weeks ago go about a "like" on one of my replies here. I have no idea, mechanically, how this person "liked" it.
Some people have old versions of some app where the like buttons are still present. I guess if they care a lot they could also send an HTTP request to like your post by hand.
I just liked your comment, you seem to be able to do this on mobile (android at least)
I think you can do that by commenting how their comment added value to you. It's higher-friction than the likes button, but I think in this case the inconvenience creates a better comment section, since it does away with the incentive to tailor comments for likes, which gets Goodharted
Too much like high school. People tailor their comments for likes.
^ this.
(sorry)
People didn't like them, and there was a stream of clogging up email inboxes every time someone 'liked' a comment, so far as I recall.
Ah yes, I had forgotten about the email.
Those emails are easily turned off.
You can only turn off all emails, comments and likes.
I liked the SlaskCode commenting, where you can vote -1 to 5.
Woody Harrelson: "The biggest drug cartels in the world get together and buy up all the media..." Media: coordinates with each other to attack Woody Harrelson and discredit what he just said, despite the fact that these attacks prove he was right. The corporate media is the enemy of the people.
58% of Americans agree that "the media is the enemy of the people", per https://www.rasmussenreports.com/public_content/politics/current_events/media/58_of_voters_agree_media_are_enemy_of_the_people . This is not a new take.
Certainly, not as many people would agree about the business with the "drug cartels", but thinking very negatively of the media is the "normal" position these days.
This is 'electric fence' work, and we need to be careful to deconstruct electric fences whenever we see them.
What do you mean by "electric fence"?
Usually a fence is something hard and durable that keeps livestock from straying their boundaries. But fences being hard and durable are hard to move. So we put up an electric fence, which is light and easily moved, but punishing.
The drug cartels in coordination with government and media has erected electric fences around suggesting that they even exist, as we see with the punishment Woody Harrelson received for suggesting that the drug cartels even exist.
That's the electric fence which needs to be deconstructed.
Yeah, I have no idea how to battle this problem
Fund, watch and support independent media.
😂
"That you deny it proves that it is true" is a both false and extremely annoying rhetorical device. This is bad argument because it would work for anything.
E.g.
Woody Harrelson: "The media has been bought out by lizard-men pedophiles"
Media: Woody Harrelson gone crazy!
Online-conspiracists: these attacks prove he was right
Oh, they're not denying it! No, no, no! Not at all. They're *attacking* him for saying what he said. It's like they're all coordinating behind the scenes on Slack. What he said was about a fictional script that he was looking it. And he threw it away for being too unbelievable.
I feel like wandering past a TV in a gym or elderly relative's house and being exposed to news on either the right or the left is vastly more likely to result in seeing an advertisement for pharmaceuticals than it is a message about why lizards are people too.
This needs to be a SNL skit.
Conservation of Expected Evidence
https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence
That almost slipped past me. It seemed like it could have been a reference to Covid policy.
“So the movie goes like this," Harrelson said. "The biggest drug cartels in the world get together and buy up all the media and all the politicians and force all the people in the world to stay locked in their homes. And people can only come out if they take the cartel's drugs and keep taking them over and over."
An anti vax reference maybe?
Anti-pharma and anti-capitalist does not mean anti-vax. The former explains the data equally well.
Is there any new info on this?
Is there any more research on 5-HTTLPR?
https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-review/
So- repressed memories are absolutely a fake thing, right? (Has Scott ever written about this?) The Wiki page is pretty damning (1), saying 'most research psychologists who study the psychology of memory dispute that repression ever occurs at all' and contains this quote from a prominent psychiatrist- "The notion that traumatic events can be repressed and later recovered is the most pernicious bit of folklore ever to infect psychology and psychiatry. It has provided the theoretical basis for 'recovered memory therapy'—the worst catastrophe to befall the mental health field since the lobotomy era."
I was reminded of this as Tim Ferriss (of all people) announced on his podcast that during a 10 day meditation retreat, he suddenly remembered that he'd been abused as a child. Obviously this is a super-fraught topic, and it's uncomfortable to denigrate other peoples' experiences, but it seems likely that extended meditation caused a temporary break in reality for him. 'Meditating for a really long period of time can cause you to temporarily lose your mind' is not only a phenomena known to modern psychology, but was well understood by Buddhists thousands of years ago, and I've seen it discussed anecdotally a bunch. Anyways, weird stuff! I hope that Tim gets better, however that's defined
1. https://en.wikipedia.org/wiki/Repressed_memory
I think you've got it. The evidence against "repressed memories" is quite strong in that there is a conspicuous dearth of evidence for them that you'd expect to be the case and a substantial body of evidence in favor of the formation of false memories in the conditions conducive to where reports of recovered repressed memories cluster.
What's interesting to me is this issue was decided beyond reasonable doubt by the end of the 1990's. While claims of recovered/repressed memory is way past its peak, you still still it show up a fair amount - even once in awhile among therapists.
"Memory recovery therapy" is 100% fake.
Memories spontaneously "recovered" while in a highly-suggestible mental state (hypnosis, meditation, psychedelics) are almost certain to be fake.
People do occasionally recall traumatic events they'd previously forgotten, but this looks a lot like ordinary forgetting/remembering (triggered by a specific reminder of the event, usually vague at first, gradually filling in details of questionable reliability). Vivid 'flashbulb' memories are common in trauma, but there's no evidence that they can be lost and then recovered.
Yes, I think that too. The 2 patients I had who recovered abuse memories in adulthood did not recover them in a
therapy session, or even during a period of their life when they were seeing a therapist. Memories just came back abruptly in the middle of an ordinary day.
>and it's uncomfortable to denigrate other peoples' experiences
Is it?
I have a loved one suffering from implanted memories of Bigfoot. The stories get more intricate as time goes by.
So here, the TV is the source.
The issue might be that it is intrinsically very difficult to tell the difference between a putative recovered memory and a newly-synthesized memory, created for any number of current reasons. I would say the work of Elizabeth Loftus in particular has shown that it is shockingly easy to create fake "recovered" memories in ordinary people -- we just seem to be a remarkably suggestible species, and to have few good defenses against having our memory manipulated, either by ourselves or others.
How do we tell, even in principle, whether a memory is real, assuming we don't have factual corroborating evidence? (It only *matters* whether a memory is real if we don't have objective proof one way or the other somewhere else.) One traditional way has just been to ask whether the memory has been present from the time of the thing remembered until the time we ask. That kind of works, because it rules out the effects of current manipulation. At least the problem is reduced to the usual Rashomon problem of people differently interpreting things at the moment they actually happen.
But if there *isn't* a "complete chain of custody" between the time of the event and the present, how can we tell? I can't even think of a theoretical answer. "Nobody would invent/cause a memory like such-and-such " is wholly inadequate: there are no obvious limits to the types of memories people can create or absorb. People remember being anal probed by space aliens. "It seems reasonable/plausible to me that this happens" is also inadequate, as memory is only an improvement over expectations to the extent it differs, and especially as it can easily be contaminated by prejudice. ("It seems reasonable to me to believe that that young black man tried to rape that white woman, so I believe her memory of this is what happened before she shot him.")
What else can we try? It could be that repressed memory is a phenomenon that might exist, but which we can never prove, even to ourselves, exists, sort of a Boltzmann memory.
I agree there's no way to prove a memory is real, but I do think there are things that make it likelier that it's real. As I recall, one of Loftus's studies involved suggesting to people that as a small child they had gotten separated from their parents at a store -- after being told that, many felt they remembered the incident. But I don't recall subjects' fleshing out the memory -- "Yes I was in a Dunkin Donuts and the man behind the counter helped me." But consider a patient of mine who had a childhood memory of finding porno comic books in a certain drawer in her room. She remembered which drawer it was, who the cartoon characters were. She remembered reflecting that her father must have put them in her drawer. She remembered that sometimes old porn comic books were swapped out for new ones. That's a detailed memory. It's also indirectly supported by other details about the father which were later confirmed by the patient's mother. She did not know how old she was when the incident happened, or whether the porn comics period lasted for weeks, months or years, but she was sure it had happened. So am I -- at least I'm as sure as one can be about somebody else's described experience.
It's been a while since I heard her talk, but my vague memory is that was one of the things Loftus found, which makes this whole process sort of mildy horrifying: once people had committed to the fake memory, they did indeed go on to "remember" all kinds of corroborating detail, stuff that would plausibly fit the fake scenario -- and, worse (with respect to detecting the invention) even detail that only they could have known, internal states, stuff they did immediately afterwards, et cetera. That's why I said my impression from her work was that there was *no way* even in principle to look at the content and detail of the memory and tell whether it really happened or not, because in the case of an implanted memory, you would have not only the bare outlines suggested by somebody else, but all kinds of detail fleshing it out that would've been added by the person in whom the memory was implanted. Alarming stuff.
I should add that in none of these cases was there any suggestion of malignancy. The person in whom the memory was implanted had no motive (other than pleasing people) for inventing the memory, e.g. wasn't trying to fool someone, and by all appearances seemed to geuninely believe his new-found memory himself.
Also, she said not everyone was susceptible this way. It wasn't possible to implant memories in everybody, some people were 100% resistant as far as she could tell. But in some people it was fairly easy. I vaguely remember she said there was an association between this susceptibility and other aspects of personality, but can't recall what they were.
I have had 2 patients who described quite convincingly, and without drama, having memories of abuse come back to them. However, that's 2 out of 50 or so whom I've seen, and the other 48 never forgot the physical or sexual abuse they endured as kids. I was not abused, but I have from time to time as an adult suddenly remembered experiences I'd had as a kid -- some of them bad experiences. They weren't, though, memorably *awful* experiences, just routine kid stuff. On the other hand, the experience of being sexually abused as a kid is not necessarily awful in the same way as being beaten is. Some abusers are gentle, and give bogus explanations for what they're doing that the child believes, and while the whole thing is unsettling and weird, it does not always leave the child sobbing and terrified. The events actually become more toxic over time, as the kid understands more about the true nature of what happened. So I can believe that some people forget their abuse -- not because it's too terrible to be remembered, but because it was a weird experience that kind of creeped them out, and didn't make sense, and they sort of threw it in the memory hole with the chaff.
The most coherent integration of the pro-vs-con repressed memory sides I've ever heard was my own therapist explaining that people don't usually repress memories, but they do often repress the *meaning* of those memories. So, you'll recall X thing from childhood, but you won't ascribe significance to it, or will misunderstood how it affected you.
Under that model, I can absolutely imagine somebody "recovering" a memory of abuse in the sense of they always had access, but they tended not to think about it until the meaning came crashing in one day.
The Wikipedia page to me is unclear, because it seems to make a distinction between "repressed memories" which are seemingly not real, and trauma-modulated temporary forgetting of things, which does seem to be real, but I don't understand what the difference is supposed to be.
Quote: "Despite the claims by proponents of the reality of memory repression that any evidence of the forgetting of a seemingly traumatic event qualifies as evidence of repression, research indicates that memories of child sexual abuse and other traumatic incidents may sometimes be forgotten through normal mechanisms of memory.[23][24] Evidence of the spontaneous recovery of traumatic memories has been shown,[25][26][27] and recovered memories of traumatic childhood abuse have been corroborated;[28] however, forgetting trauma does not necessarily imply that the trauma was repressed.[23] One situation in which the seeming forgetting, and later recovery, of a "traumatic" experience is particularly likely to occur is when the experience was not interpreted as traumatic when it first occurred, but then, later in life, was reinterpreted as an instance of early trauma.[23]"
I have a problem here, as (something I've mentioned on here before) a family member is claiming to have memories of abusive behaviour in childhood after going into therapy. They firmly believe it happened and won't entertain any objections, even when it's "I was there and that just did not happen".
I have an idea where they're getting the inspiration for these 'memories', which are distorted interpretations of real events (they read a *ton* of what I'd call 'misery porn' , the sort of "I had an abusive childhood" in the vein of a "A Child Called It", as well as the type of crime novels that are heavily family dramas revolving around hidden secrets coming to light to show that a 'perfect' family was anything but), and they have real psychological problems which are at the root of all this. But the therapist is only going by what they tell them, so they will encourage the family member in their delusions. I'm not blaming the therapist for implanting these, but it can be very difficult when it's "he said she said".
Are there real repressed memories? I don't know, I suppose so, but much less than was the popular craze a while back when the media was full of spectacular cases of alleged horrific abuse all coming to light due to recovery of repressed memories.
I vaguely recall something on SSC that touched on repressed memories but I have no idea what. My understanding is that suggestion is a powerful thing, and when someone walks into a psychotherapist's office thinking they had a perfectly okay childhood and walks out believing that they were severely sexually abused, there is absolutely fuckery afoot. However, even in the absence of shitty psychotherapists who think it's their job to bring up repressed memories with no evidence that these even exist, victims of trauma do suffer from impaired autobiographical memory, and while completely forgetting the trauma may be rare, if someone talks about their own "repressed memories" you can assume that they're talking about the real memory problems that follow trauma. See here: https://www.tandfonline.com/doi/pdf/10.1080/09658210701256423
The trouble is, it can both be true that:
1) Once repressed memory becomes popularized as a trope, it produces a rash of cases that are indeed fake. This is related to the culturally influenced "Crazy Like Us" phenomenon. [1]
2) People have experiences that they have forgotten, at least in the weak sense of "genuinely not consciously thought about for decades," and then later remember due to some weird cue.
I have personally experienced 2) about things that are not at all dramatic, where I'm certain the events were real. I wouldn't call the memories "repressed" or "recovered" because that makes them sound dramatic where they weren't, but I'm still sure the basic phenomenon is real –– and can feel weird.
[1] https://www.amazon.com/Crazy-Like-Us-Globalization-American/dp/1416587098
I feel like the line between (2) and "repressed memories" is blurry and pedantic. I have personally experienced (2) regarding traumatic memories that I part-consciously/part-subconsciously repressed because they were traumatic.
To be concrete: When I was 21 I realized I was gay. The "realization" happened on a single day when I thought "okay, I guess I'm gay, time to accept that," preceded by about a year of worrying that I might be gay. A few days after the realization, I asked myself what exactly had happened a year prior that set me on this path. Within minutes, I realized I had been worrying about this at least 2-3 years, and within an hour I realized it had been way longer than that.
I spent the next several months trying to remember, contextualize, and unpack my childhood memories. Turns out, when I was 13, I decided to repress my homosexuality long-term, and basically developed a self-brainwashing rulebook. Since then, I had been telling myself it was "only a phase" perpetually, which is why I later thought it had only been going on for a year.
While it's possible I've re-contextualized things in an inaccurate manner, or dramatized the extent to which they were repressed (it's hard to recall not knowing in great detail), I am very skeptical of the possibility that I fabricated the memories. They are too numerous, too consistent, too interconnected with other memories.
I'm not sure how exactly this case differs from "repressed memories." It seems to me more reasonable to say "repressed memories don't work like how they do in movies, and when they seem to it's often a sign of memory fabrication" instead of "no such phenomenon exists"
Everyone in this comment thread seems intelligent and well-intentioned, but is missing how profoundly dishonest people are with themselves constantly. For examples less emotionally fraught than childhood trauma, consider almost anything Robin Hanson has written, chosen at random. If you want the emotionally fraught example, consider that "my parents are good and love me" is an important thing to insist to oneself, and "my parents sexually abused me" doesn't mesh with it very well. In a social context in which the second is literally unthinkable, people won't think it -- and will have an easier time being dishonest with themselves about why they moved as far away from their parents as they could without learning another language and renouncing their citizenship. These days? It's quite thinkable, and thinking the first, "my parents are good people and love me", is almost a bit déclassé.
“People are dishonest with themselves” is both a perfect explanation of how repressed memories could be real AND a perfect explanation of how they could be fake…
Yeah I am not big on the "repressed memories" thing. Quite the skeptic. That said, I did have a period of my life from say ~ age 25-30 where I simply forgot something very specific and noticeable about my life.
Specifically my mom and dad got divorced when I was 4, but then when I was 13-14 my mom remarried another guy for a year, and his two children lived with us. This caused A LOT of drama at the time, and their marriage didn't end great. But I was mostly checked out of the family, living in the basement and not involved. Seems like memorable stuff. We even moved to a bigger house to fit the combined family in it.
But there was a like a 5 year period in my adulthood where I simply forgot my mother had remarried. Or the experience was "misfiled" or something. People would ask if my mother had ever remarried and I would say "no". I didn't have a bad relationship with my stepfather. But for some reason the whole episode was just so ephemerally connected to what I cared about that it just didn't get filed under "mother remarried". Maybe because they got married in Vegas without the kids?
Anyway once my sister heard me say this, and was like "don't you remember Joe and his kids", and I was like "oh yeah, nm I was wrong she DID remarry." And it felt very weird to be telling this small personal detail wrong for years unintentionally.
But like I said I don't think that memory was "repressed" so much as misfiled under "people who lived with us for a while" or something.
I saw the movie Jesus Revolution last night. It's a bit clumsy in its execution, but it covers a really interesting period of Christian revivalism in the 70's. I hadn't realized how much of the sort of modern, "non-denominational" American church had come out of a singular movement. One thing that sort of stuck in my craw, though, was that at the end the film has a card which reads, "Many historians classify the movement as the greatest spiritual awakening in American history."
Like, this is a really bold claim. America has had a pretty good number of Revivals in its history. So I'm curious, if you all had to identify the Greatest Spiritual Awakening in American History, what would it be and what kind of metrics do you even use for something like that?
>a really interesting period of Christian revivalism in the 70's
Jim Jones was an atheist and devout socialist and became a darling revivalist preacher in the late 60s and early 70s. It was a crazy time!
I feel like it’s pretty hard to argue that it can be anything but the Second Great Awakening. Gave us the Jehovah’s Witnesses, the Latter-day Saints, the Adventists—pretty big thing.
Ross Douthat provided a link to the origin of his faith in ‘Why You Can’t Predict the Future of Religion’ NYT piece yesterday.
Https://www.firstthings.com/article/2019/01/grace
Not a believer myself, but with an experience like that…
The first Great Awakening set us off on the trail to independence from Britain, which also seems pretty important.
And gave us Sinners in the Hand of an Angry God
Good point! I was mostly thinking about religious impact but political impact is a good lens too. You could similarly try to connect the Second with temperance and antislavery.
A review of sorts: https://www.firstthings.com/web-exclusives/2023/02/revolutionary-christian-cinema
> Jesus Revolution, which releases in theaters today, tells the story of the southern California “Jesus Movement.” The movement left in its tie-dyed wake several quasi-denominations (including Calvary Chapel and the Vineyard), the groundwork for the contemporary Christian music industry, and a host of figures still notable today. It is a tale ripe for the excesses of made-by-evangelicals filmmaking, where drama often morphs into preachy melodrama. To their credit, the filmmaking team—led by Jon Erwin, Andrew Erwin, and Kevin Downes—largely resists those temptations. With solid acting, appropriate pacing, and general adherence to the maxim “show, don’t tell,” the result is an effort that is not cringe-worthy but worthy of support from Christians of all stripes.
It's got Kelsey Grammar in it if you like him.
I can't say I agree on the pacing, I thought it was a pretty messy script in that sense. I think it struggled to juggle three separate protagonists and their character arcs, and would have been better served by making the movement itself the protagonist. Instead it ends up being almost a highlight reel of the significant events within the three men's lives, and feels like it's constantly rushing to make sure that all of the pivotal moments in their lives are fairly represented. Many of these moments are definitely moving, and would be hard to cut from a script, but the pacing suffers for it.
Felt like there was a lot of bullshit flying around the whole NoFap movement so I thought I'd give it a try for a couple months to see what it was all about. I mostly found it vastly overrated, but there were some interesting changes.
1. My sex drive has diminished.
Most people talk about having a higher sex drive after stopping porn, but the opposite happened to me. Less thinking about sex is super useful, however.
2. I'm more attracted to my partner.
Not that I wasn't attracted to her before, but I guess looking at gorgeous pornstars less often has helped me appreciate how attractive she is?
3. Some anhedonia in the early days.
I know this is the feeling some people get after going cold turkey on something addictive, but it still came out of nowhere. Is porn really as addictive as hard drugs? I could easily avoid using it if I wanted to, and it never interfered with my everyday life. Yet I still suffered withdrawal-like symptoms. Maybe it's a more potent stimulus than I thought.
4. It's definitely easier to focus and get shit done.
This is probably the only thing highlighted by the NoFap movement that proved true for me. Being less horny and thinking about sex less often has made me far more engaged in my work. This was accompanied by a broader motivation to improve various aspects of my life (be more effective at work, more exercise, better nutrition, etc).
I realise this newfound motivation is one of those things that's confounded by me making quite a significant life-change in ceasing porn use. Maybe it's just nudged my brain out of some local minimum, in the same way shock therapy appears to treat depression. Either way, the change is welcome and I don't want to look a gift horse in the mouth.
There's a distinction between no-fap and no-porn. To an extent, arousal begets arousal, so it's not surprising that long stretches of low sexual activity would diminish sex drive. There's a personal balance to be found with masturbation and it probably depends on your refractory period and age. A propos, I can't remember if masturbation improves performance, but I think that the current "common knowledge" is that it does in moderation, if you do it right.
Excess porn consumption (even if you're just watching and not jerking off) can lower baseline dopamine levels. I think some people are more susceptible to addiction in this area than others, particularly if they indulge fetishism and more extreme content.
I have always thought that being undersexed keeps me as a man more focused and attentive and "trying to prove something to the world". Puts my psychology more in a place where I feel like I need to improve my place in the world and take chances and focus.
On the opposite hand being oversexed makes me more lazy, unambitious, conservative, and hedonistic. Despite knowing all that I still have sex as often as the marriage allows and masturbate regularly and it is great. Orgasms are amazing.
Amen to that!
Glad that it worked for you. I would expect different outcomes for different people, based on different sex drive, availability of actual sex, and maybe how stimulating is their everyday environment.
Yeah if I wasn't having actual sex either I doubt the last two months would've been bearable
There's a fair bit of confusion for most between "sex drive" and "compulsive behavior". The difference is obvious once you think about it, in the same way that being hungry is different than eating when bored or stressed. If nobody knew the difference between those last two and quit stress eating, the experience might get parsed linguistically as "wow i'm so much less hungry these days".
I did think about this explanation, but what led me to frame it as diminished sex drive is because that felt more consistent with my experience.
Even though I felt more attracted to my partner ex-post, my desire to initiate sex was still higher ex-ante.
Also, and I know this might sound like bullshit, but it made me look at women slightly differently. Like I still look at TnA as much as the next guy, but my level of arousal after looking at it was much lower. I see how this might be explained by the kind of "compulsive behaviour" you mentioned, however.
You may still be right, but it's hard for me to rationalize why the urge to have sex with my partner was higher before stopping porn? I guess it's possible that I'm still recovering from porn use, and the withdrawal symptoms have yet to go away? Or may be using porn for so long has led to a kind of permanent depression of non-porn related sex drive?
> it's hard for me to rationalize why the urge to have sex with my partner was higher before stopping porn?
Seems straightforward to me: Watching porn teaches you to think about sex often. (Just like reading Facebook/Reddit teaches you to think about Facebook/Reddit often.)
I will make an analogy with food. The simple model is something like "hunger makes me want to eat". But the actual model is more like "hunger makes me *think* about food; thinking about food makes me want to eat". The difference is that in the latter model, there are also *other* things that can make me think about food... such as seeing food, or talking about food. So if I started watching pictures of delicious food in my free time, I would expect to desire the actual food more often.
How much of the changes do you think were about the masturbation itself, vs eliminating porn use? Eg, what would the results be if you continued to fap, but with no (or much tamer) visual aids.
It's an interesting counterfactual and I suspect the increased attraction to my partner had more to do with porn use than with masturbation. Have no idea how it would effect the other outcomes. Maybe it would mean less withdrawal symptoms?
Scott - given some of what you've commented about AI alignment recently, especially in the 2018-predictions-grading post (that is, current AI being more humanlike than expected), I want to ask if you've read about Shard Theory, and what you think about it if so. It describes an approach to AI alignment that builds on imitating human values, i.e.:
https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values
https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX/p/pdaGN6pQyQarFHXF4
https://www.lesswrong.com/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into
I mean, I imagine the answer is 'yes, of course I've seen it and my comments are informed by it', but I'm not that certain about the social dynamics, so it seemed a valuable thing to link just in case. The approach seems obviously right to me, personally, at least given that alignment-by-utility-function is so far a failure.
Someone tried to explain it to me tonight. I have it on my list of things to look into harder.
I know you asked Scott, not me, but I think any approach that tries to install in the AI some version of what governs human values and human behavior would be a disaster. Look at what people are capable of, for fuck's sake. Read the news. Read the savage and crazy comments on Twitter. Most people have some failures to align with the good of humankind. Many people have enormous failures to do so.
I think most human failures on morality reduce to a denial of certain inconvenient facts. As long as truth is highly valued, it might work.
Why would someone break into an apartment, rearrange stuff in the bathroom, then leave without stealing anything? Asking for a friend.
The incident appears to have been caused by the confluence of a lost maintenance guy and a malfunctioning cell phone.
Looking for drugs and knocked some shit over, and put it back wrong in a panic would be my first guess.
I once had someone break into my car, move everything inside around a bit, and then just leave it all right there despite there being some minor valuables etc.
I think they were hoping for a laptop or something.
Current hypothesis: the condo association decided to check the plumbing and was negligent in notifying.
My sketchy ex copied my keys without my knowledge, and once while I was our of town used my living room as a staging area to pack for a trip (her place was tiny). Over-tidied afterwards and I noticed immediately. Freaked me out for a bit until I figured out what had happened.
I had the locks changed after a breakup for exactly that reason!
First, you have to find out did that really happen. Back in my social housing days, we had a client who was fine when on their meds for schizophrenia, but used to regularly go off them, and one of their delusions was precisely that: people breaking into their house and moving things around. We used to regularly send workmen out to change the locks for them, not that we believed it happened, but it was an easy and relatively cheap way to keep the client happy for a bit (until the next set of delusions kicked in).
Not saying your friend has mental problems, but first make sure it *really* happened.
Lo, the time has come for the traditional incantation: "You gotsta believe me!"
Feels like you can set up nanny cams if you're expecting it to happen again.
Israeli Mossad used to do that as a warning, to people who were on their hit list. Of course, their hit list was pretty inaccurate and they murdered a lot of innocent people. Oh well!
Stalin used to ask people, “You don’t have a pistol? You’re not going to kill your self are you?”
When the reply was “No” Stalin would say, “Good, good.”
It was meant as a hint.
It's so tiresome to see people who don't think that countries have intelligence agencies. They did that to Gerald Bull, who was working on a V-3 style supergun for Iraq.
>During the months before the assassination, Bull’s apartment suffered several non-robbery break-ins, probably as a threat or a warning.
>On March 22 1990, he was assassinated. Bull was shot (7.62 mm) twice in the neck and 3 times in the back at point blank range while approaching the door of his apartment in Brussels.
>The gun was silenced. No one heard the shots or witnessed the assassination. The killer took the time to collect the bullet-casings.
>When police arrived at the scene, they found the key still in his door and his unopened briefcase containing nearly $20,000 in cash. The assassin was never identified.
His murder has been covered by that famous chemtrail and hollow earth conspiracy theorist show, PBS Frontline.
Agreed. Hopefully someday we can discover what country was responsible for Bull's murder and hold that country and its agents accountable for their crime.
Does your friend have a carbon monoxide detector? There was a famous reddit post of someone who had that kind of stuff happening and it turned out he was suffering from carbon monoxide poisoning.
That was my first thought too.
Could be your cat Brendon. Mine do that shit all the time.
Sadly, the cat in my profile pic has been dead for about 10 years.
GHOST CAT!
What does your mental health professional say?
Any suggestions for how to pick a book for the book review contest?
Check out the lists of previous finalists.
And then pick something different, because I am so bored of recent pop-non-fic. Do Pride and Prejudice! Do The Very Hungry Caterpillar! Do a 2004-edition Lonely Planet guidebook to Peru!
I think the winning formula is "classic which everyone has heard about but nobody has read."
Finnegan’s Wake!
I think the winning formula is "classic which everyone has heard about *and wants to read* but nobody has read"
I’m not sure if “completely made up interpretation” or “review written in the same style” would be more entertaining
I have 3 subscriptions to Razib Khan's Unsupervised Learning to give away. Either reply with an email address, or email me at the one I specify here: https://entitledtoanopinion.wordpress.com/about
I'll take another!
I need an email address to send you one.
I’ll take one.
closed.limelike.curves@gmail.com
Sent.
Thank you!
What do you think about the National ADHD medication shortage? It seems to be impacting many many people. Is it worse in big cities? What is causing it? What are the results going to be for so many people to suddenly be forced off their medications? Why must adhd meds be treated so intensely? Is there anywhere on earth where these meds can be purchased otc? It seems really unfair to make it so hard for people with adhd to get effective treatment.
The regulatory causes are, of course, infuriating but they are to be expected. I am most annoyed with the pharmacies that know there is an issue, but seem to have no interest in being on the side of their customers. I know they can't solve the shortage, but they could at least be more proactive about communicating about what they have in stock, when they expected to get stock, etc. I used to get dozens of tests about my prescriptions from CVS, now i just get a text confirming they received the script then nothing, not even a text saying they don't have stock.
"Why must adhd meds be treated so intensely?"
If some people must suffer in order to prevent wrongfun, so be it.
-Carrie Nation
“Wrongfun?” Are we in 1984? Why should the government control if I enhance my performance?
If that’s the case then shut down Viagra and ED drugs.
You know you're talking about a government that poisons ethanol because it's better that some people should die instead of getting drunk, right?
Why do you think your personal issue should somehow be exempt from the total state?
It'll be interesting to see if cold-turkeying/tapering a nation of Adderall addicts will have lasting health benefits. As a general rule of thumb, getting sober and reducing drug intake is a good thing. Stimulant abuse in general is probably not good for mental and emotional stability. Hopefully this will have some long lasting benefits.
As I imagine a large number of commenters here would fall into the camp of daily/weekly amphetamine users who are now not happy that their drug of choice has dried up, they may have a different view on this matter. But I'd hope that it causes some people to reevaluate the necessity of their use.
The implications in this comment don't conform to my lived experience as an Adderall user and I don't think they conform to the lived experience of most people with ADHD
>As a general rule of thumb, getting sober and reducing drug intake is a good thing.
Tell that to all the people on antidepressants and antipsychotics. Or in the other direction, all the people who make modern life possible using caffeine to stay productive.
You are improperly applying a heuristic that isn't relevant to the situation. "Drugs" aren't bad, habits that prevent you from being a good member of society are.
Edit: After expanding the collapsed comments, I realized that you were already presented with these exact objections, and dismissed it on the basis that "normal" people gain benefits from it. Scott has already covered this exact objection:
https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/
But to summarize: yeah. So what? Ability to focus or "function" is a normally distributed trait, you can't draw a line between people who "need" it and people who use it as a "performance enhancer" (which is everyone, including those who "need" it, since it treats), and people being better benefits everyone.
Also, you seem to have totally ignored the questions about caffeine, which I keep seeing articles attributing basically every modern convenience to (https://twitter.com/robkhenderson/status/1630230741505155074), and is much more relevant to the discussion.
I ignored most of the replies because it isn't something I feel particularly passionate about, and lots of the other replies did seem to be passionate about it, and I didn't want to cause any unnecessary irritation.
I'm not saying "normal" people gain benefits from it, I'm saying everybody gains benefits from it. As someone who also functions much much better on drugs like Modafinil/Adderall and so on, let alone Nicotine/Caffeine, it seems to me self-evident that people who are in industries and social circles where everybody is using a particular amphetamine performance enhancer to do better at work etc. would be ticked off when supply dries up. I think in general dependency on a substance to reach a baseline level (such as nicotine) is not a good thing.
One of the commenters on the Scott article above lists the following symptom they received:
"A vague, sourceless anxiety that regularly showed up about 8 hours after taking my meds and lasted half an hour to an hour."
This is just a stimulant comedown. I'd be very surprised if long term usage of daily stimulants which are much stronger than a cup of coffee could fail to impact emotional and mental stability. My point was only really supposed to be that it'll be interesting to see if some people come off it permanently as they go cold turkey- probably not.
>I'm not saying "normal" people gain benefits from it, I'm saying everybody gains benefits from it.
Sorry, this is sort of splitting hairs/a failure to communicate effectively on my part. Clearly that's correct (as it's also what Scott says), but I meant that since you don't seem to have a problem with prescription drugs that only benefit the subset with a disorder, your objection to the set of all benefitting must lie with the neurotypical subset of the benefit, modulo those with a disorder. This was probably unnecessarily confusing on my part, sorry.
“Adderall addicts” right. 🤦♀️ it’s every adhd medication now, not just adderall. And what will happen is people will fail out of school, lose jobs, mess up stuff, and possibly get addicted to worse things in an effort to cope. Many will cry. Some people will be fine, sure, but others will be deeply frustrated and probably some will commit suicide.
Would you call depressed people Prozac addicts?
Or how about we shut down all the liquor stores in the country? You cannot convince me that Ritalin is more addictive or dangerous than vodka. We don’t give tequila to middle schoolers for a reason...
It would arguably be safer for every person on earth to take Ritalin every day forever than for people to periodically get drunk, yet the liquor stores are all still open and widely available. Very frustrating.
Do you feel this way about coffee drinkers? Why don’t they get the addict label? Their is a clear stigma against people with adhd on treatment. Yet studies show that the treatment works and is safe. I personally feel it should be available over the counter to adults.
It’s not a performance enhancer for people who need it, it’s a focus and routine/mood stabilizer and it’s ridiculous for it not to be available while alcohol, coffee, and cigarettes remain legal and easy to obtain.
Again, you cannot tell me that alcohol is safer than Ritalin. It’s 100% not. And alcoholics abound, yet we aren’t limiting the supply in any dramatic way.
>Do you feel this way about coffee drinkers? Why don’t they get the addict label?
I don't feel judgemental about any drug users, until it impacts others around them. But sure, we can say "caffeine addict", I myself am one. But we don't use this term only because of the ubiquity of caffeine addiction.
>It’s not a performance enhancer for people who need it, it’s a focus and routine/mood stabilizer
Fang correctly pointed out that it is indeed a performance enhancer. I'd also suggest that taking a drug to be able to follow a routine doesn't seem to be in the same category as e.g. antipsychotic medication. I would benefit (in terms of productivity) greatly from this drug, as would most people.
>and it’s ridiculous for it not to be available while alcohol, coffee, and cigarettes remain legal and easy to obtain.
I'd say adderall is a different beast to coffee and nicotine in terms of magnitude/impact. You're obviously correct re alcohol, but that's the same as most other drugs. And it ignores the deep seated cultural infatuation we've had with booze for the last 6000 years (and caffeine in some places).
>Again, you cannot tell me that alcohol is safer than Ritalin. It’s 100% not. And alcoholics abound, yet we aren’t limiting the supply in any dramatic way.
Maybe we should..? I think society would probably be a better place if the top 10% of alcohol consumers cut consumption by 75%, and the rest by 50%.
> It’s not a performance enhancer for people who need it
I agree with the rest of what you're saying, but this is trivially untrue, because its primary indication is treating a disorder of executive function. (Scott has actually written about this before: section 1 of https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/)
Even if we decide it is a performance enhancer, why should the government limit me from enhancing my performance? I don’t see a Viagra shortage due to the government... There is no need for these extreme restrictions on this.
My bf and I are joking that the government cut the adhd med supply to slow the spread of AI and to slow down big tech advancements. Lol
But thanks for the link, I’ll give that a read. Though my ADHD medication was never adderall, however now all types are out since the adderall shortage caused everyone to switch... 🤦♀️
As I wrote above, I agree that we don't need draconian gatekeeping on prescription stimulants. Just pointing this out because of the community norm of not saying untrue things, even in defense of good points.
There* see even my typos have increased since this “cold Turkey” experiment. Ugh. So frustrating.
You know if you tap on the ellipses to the right of Reply, Gift a Subscription, Collapse, an option to Edit pops up, don't you?
I’m on my iphone and that is actually not an option. Maybe it only works on a computer?
It might have sounded a little flippant due to our negative connotations, but I'm not sure what else you'd call it. The west also has vast amounts of functioning alcoholics/caffeine addicts, but if you'd prefer to say "adderall dependent" then go ahead.
The difference of course with prozac is that many (or most) people on adderall don't need it to function but instead use it as a performance enhancer (see finance, law, tech). There are many people who are unnecessarily prescribed Prozac, but I don't see what benefit they gain from it. On the other hand there are millions unnecessarily prescribed adderall who gain a lot from it.
I don't think think it is a good thing that it has happened in this manner, but the US has got itself into this mess and I'm not sure what the best way of going about getting (nearly) everybody off it would be. Prohibition has a lot of nasty side effects but one thing it is very good at is lowering general usage. It will be interesting to see if this temporary shock results in longer term usage reduction.
Why is the government doing this?
Do they think they are solving a problem?
They are absolutely solving a problem.
That problem being a decrease in price caused by a theoretical increase in production.
Ah, the drug cartels win again
Well, it’s not working because no one raised the price. I’d happily pay more, but it’s just zero percent available at any price.
Incompetence is my only theory. Apparently the pharmaceutical companies applied to increase manufacturing to meet consumer demand and the DEA denied the request. Which in my opinion, the DEA should no longer exist and certainly shouldn’t be in control of American’s mental health.
At first I was worried about the ADHD medication shortage but then I thought ooh a bird
Yep.
Public service announcement: If you are having trouble getting your adderall script filled, ask the prescriber to switch you over to dextroamphetamine for a while. It's almost the same stuff and last I knew there was not a shortage.
I've had luck switching between XR and IR depending on what the pharmacy has in stock. Luckily either version works of me, though it may not for others. I also only get 30 day supplies at a time so switching between the two has less impact.
If they can't fill my script, i just ask the pharmacy what they have in stock and have my provider change it to match. I am lucky to have an NP that is willing to do this.
I’m in south Florida and every substitute is no longer available except for the norepinephrine ones like Phentermine and Strattera, which in my opinion are not substitutes at all. It’s so messed up.
You tried asking for dextroamphetamine? Brand name is Dexadrine. It's not used much. Some docs haven't even heard of it, others are fearful it will lead to addiction. In fact, though, it's very similar to adderall, and a component of adderall. Adderall is 50% dextroamphetamine, 50% plain amphetamine.
I called six pharmacies in the area and asked them to tell me every Adhd medication that they had that was available, that wasn’t back ordered. That wasn’t one anyone had. But maybe it could be ordered.
It’s very challenging because here in Florida every subscription has to be sent electronically, and can only be sent in every 30 days, so you’re left calling around to check availability then calling your doctor back to let them know where and what to submit, because they can’t search all around town for every patient.
It’s really out of control. For example, Walmart pharmacy said they haven’t had any ADHD stimulants available for the past 3 months, aside from Strattera.
A compounding pharmacy said they just aren’t even taking adhd prescriptions anymore because they take so long to fill that their phone lines are clogged up by patients calling and calling to check on the shortage...
Walgreens and CVS were doing better, getting shipments of methylphenidate in periodically, but now there is a long waiting list and no shipments for a month for anything time release.
Or at least this is what I was told...
Wow, that sounds maddening. You might call pharmacies and ask if dexadrine (brand name) or dextroamphetamine is available. It's used so rarely that if you ask about "any ADD med" they may literally not even think of the stuff; Also helps to be willing to buy in size pill, and cut it in pieces if dose too large, take several if too small.
I basically gave up and have been letting my life descend into disorganization. Taking fish oil etc. Trying my best, but it’s extremely not ideal, especially to just suddenly be cut off.
Every prescription* I mean. I wish it was as simple as a subscription!
As a Muslim with a deep sense and reading of history and technology. I find critical thinking and logic as a way to distinguish truth from falsehood. This advice is repeated multiple times in our holy book.
Also another thing is that sometimes you can not decide on your own, because you are not an expert in every thing so asking experts critical questions and getting feedback is most important which most people don't bother to do. And read upon the relevant subject matter. Mass media and TV forget about any Truth in them. Read a lot, think critically, talk with experts get feedback, repeat the loop.
'I find critical thinking and logic as a way to distinguish truth from falsehood. This advice is repeated multiple times in our holy book.'
I find it hard to believe this is not trolling. Applying critical thinking and logic to any religion leaves very little in place.
Amusingly, applying critical thinking and logic to anything leaves very little in place.
https://en.wikipedia.org/wiki/Islamic_Golden_Age
What does this have to do with anything?
Please see my response to Carlos. This is not trolling, I am a Muslim, and this emphasis on critical thinking and logic is not at odds with my religion or it's foundation. Yes a group religious persons in Islam might say their is no place for critical thinking but more than 95 percent has the same views that critical thinking and reasoning is a foundation of Islam.
So you stop applying the critical thinking and reasoning when it comes to the concept of religion?
That's odd advice if it's stated so baldly in the Koran. Religions are not reasonable, they appeal to faculties other than reason. Using critical thinking and logic on the Koran would just cause you dismiss the whole thing.
I would differ and point this is a false assumption. If the Quran was so easily dismissed we would see 1000 arguments and reasons against it but their is none.
Also I would give an example so the chapter in Quran about people of the cave, a group of Young people who slept for 300 years in a cave away from persecution of the idolater govt, do you know what Quran quotes about their speech " why would not they i.e govt would give us a clear proof of the idols being Gods, we would worship the one who created heaven and the earth" . If the God who gave us Quran fears the logic and reason why would he tell us the foundation of a story is that their is no clear proof about a certain fact and people want to impose that on you. For the religious choice this is exact quote from Quran " there is no compulsion in religion" , and " your job o prophet is to just give the message, god is seeing the actions of his people".
It is trivial to dismiss the Quran, most of humanity does it. The atheists in particular really pick it apart. Now, I do think Muhammad was the last Prophet of God (though I do think the Quran is not perfect), but I realize this isn't a rational position.
"There is no compulsion in religion"? What about all the conquests and Muhammad killing poets that made fun of him?
Muslims conquered vast areas of the world, by and large other than maybe a few historical events where animosity or revenge would have involved they never compelled societies ( which itself would invoke punishment of God for those Muslims who do it ). An example is india and africa , jews and christians lived in muslim areas and were never compelled.
And to answer your objection on the verse on one hand side and the conquests on the other hand side, I would first tell the divine law which operates when a prophet is sent to a people, he gives them the message, the time and then God decides the result in the end, the Prophet Muhammad spent almost 20 years giving message of Quran to his people, he never compelled people to believe it in this period, it was always the message, he did not try to take a divine army the second day and come punish everyone the next day of the message to his people, 20 years he did what God commanded him to do and that was just give them the message, in the end God decided of the punishment which was in that case taking over the city of Makkah from the idol worshipper and banning them from ever entering in that city, The Prophet did not killed the conquered people or nor he punished them, in case of other prophets like Noah , Moses , Lot etc their people were destroyed as a punishment in the end.
Regarding the conquests and the reports of killing some poets, the Prophet let go of some who made fun of him and pardoned them, but the others who were killed, we have to remember their was a functioning society, their was animosity and revenge and open war so to protect the people of the state certain action were taken against people who were harming Muslims and these poets were doing that job these were no simple poets like today, they were sort of a power structure in Arabian society and had influenced for harming Muslims through insult of the Prophet and instigating , as a ruler of a state, sometime these actions were necessary so that people will not cause chaos and harm in society against followers of the Prophet. The Prophet pardoned the killer of his uncle, he pardoned the Maccan people , he never punished those who sometimes talked with him harshly and were rough, so we have to look this as a whole and then this would become clear.
Again regarding conquests, their was divine law for people of Arabia, they were being given the message, after its completion of the time, God willed to take the power away from them as a punishment, but no one in Arabia was compelled , those who fought against Muslims and lost lived their , some converted , others didn't and gave tax sometimes but not always. So it is a large picture we should not overlook. When Muslims took over the jerusalam, the christian lived their for centuries, even jews came and lived their. So during the life time of Prophet, their was a divine law which gave time to his people and told them the message and then came the results, and for us the remaining humanity, their is again the divine law , their is time for people, their is message and the results/consequence will be in the next life. Their is no compulsion, does not mean their will be no consequences in divine law, it means it is the choice of free will and their is time period to decide or reject. Sorry it was a long answer but the question require a large picture.
Mass media don't forget the truth, they erect an electric fence around the truth. Touch the fence, you get shocked.
Wuhan COVID lab leak theory, Hunter Biden's laptop, Climate Crisis ... why is there an electric fence, but to shield the truth.
A Muslim coworker gifted a Koran to me. So far it’s been the most difficult holy book I’ve read. I’m referring to the required repeated praising of God. I have trouble getting past that.
The repeated praises are not only for ryhmic beauty and majestic experience but have wisdom and lesson in them. For example when the passage ends with God is wise and all knowing, and sometimes ends with God is all Knowing and is most wise. Scholars tried to understand why God changed the words often like that and what they found is the context of previous verse shows which one would be more appropriate and hence we find those praises accordingly so their is a huge amount of learning and beauty in those.
I grew up reading the Qur'an. The repeated praising of God was enjoyable, I looked forward to it. I say this as someone who otherwise has a strong intrinsic preference for content novelty.
Awhile ago Scott reviewed the book Origin Of Consciousness In The Breakdown Of The Bicameral Mind. I saw something recently on reddit, something called the "third man factor" which is when people in stressful, dangerous situations like car crashes often feel, hear, or even see a seeming presence of another person who guides them through the crisis.
Most famously apparently by Ernest Shackleton, when he was hiking across South Georgia island to be rescued along with 2 others at the very end of their disastrous expedition, later said that he often felt like there were 4 people there instead of 3. And didn't say anything about it to the others until later one both of them confided feeling the same thing.
And the reddit thread is filled with people saying "yeah, that happened to me, one time years ago I was in a car crash..."
Seems like it fits the Jaynes theory. I'm not sure it fits Scott's alternative version of "'consciousness' really means 'theory of mind'" because presumably the people involved in these situations, like Shackleton, have a modern "theory of mind" and usually no preexisting mental illness. But I don't know enough about it to say. It's also hard to say what "theory of mind" people have in those situations without having been in it. Do they think that they have "their" internal monologue and the "presence" as separate entities? Or has their own internal monologue been "shut off" temporarily while the presence is there?
https://slatestarcodex.com/2020/06/01/book-review-origin-of-consciousness-in-the-breakdown-of-the-bicameral-mind/
https://en.wikipedia.org/wiki/Third_man_factor
The evidence for self-awareness (identifying with inner voice) before 40k years is not great. Maybe that date is closer to 12k years, depending on how seriously you take the Sapient Paradox. If our psychology is recent, then absolutely the primeval state could be exposed in times of stress.
I wrote a bit about how a bicameral mind could evolve as a result of reciprocal altruism. Don't think it could have been 3,200 years ago though. https://vectors.substack.com/p/consequences-of-conscience
I read the "Third Man Factor" after reading Scott's review of "origin of consciousness" and also Kaj Sotala's posts on LessWrong about the muliti-agent theory of mind (and Internal Family Systems therapy). I've read more on all these since, and my conclusion is that they're all discussing different aspects of how multiplicitous the complex systems we try to describe with the one word "mind" is.
This sounds like a benign, fleeting version of Dissociative Identity Disorder, which is hypothesized to be a defense mechanism against trauma, such that the victim can close themselves off to reality and make it feel as though this is happening to someone else. They might unconsciously externalize certain functions to the 'third man' who keeps a cool head even when you are panicking and overwhelmed, and once they're back to safety, they resume having an undivided identity again.
This has some parallels with PTSD: people who experience trauma do not always develop PTSD, they can either process the trauma and manage to adjust back to normality, or they can become stuck in that state of mind and unconsciously expect danger even in safety. There might be a similar mechanism for DID, that dissociating is a normal response to extreme stress that only creates DID under certain circumstances.
In the case of Shackleton, it would not be the least bit surprising. The number of life-or-death decisions he had to make correctly for the outcome that resulted is staggering.
Has anyone else experienced an identity crisis because of recent developments in AI? I spend a lot of my professional hours providing my perspective on business issues. I’ve always known there is nothing I say that some other smart person who spent a lot of time reading about the subject would say. But I was clearly contributing to overall GDP by learning and synthesizing that learning in the right setting. LLMs do a really great job of synthesizing information, and this is starting to impact my overall curiosity because I see less value in compounding knowledge in my head.
Don't give up quite yet:
https://neurosciencenews.com/infant-psychology-ai-22543/
Unless you think there is zero understanding of psychology required for your work, you're probably good to go for quite a while.
Although, on the other hand, it's certainly true that a future AI might plagiarize what you've painstakingly assembled, either directly or as a mash-up, to answer someone's inquiry, so you might want to make common cause with the digital artists who are similarly horrified.
I mean no. I do something similar, and I just don't see a threat in the near future. So much of what I do in my consulting is handholding and therapy and emotional manipulation and just person-to-person forced accountability. None of which an AI is great at.
And AI will be great at getting a 95% accurate version of what the current rules/law/regulations are. But I am already decent at that, it might take the AI 3 minutes (you need to format a query), and me an hour.
But all the hard work and what I am paid for is actually getting people to do those things in the manuals/rules. My clients can typically already read those themselves if they want (they don't).
Plus I am providing my understanding and experience with the unwritten rules/behaviors/practices in the field, something the AI will have zero access to for quite a long time.
I expect demand for my type of work to go down very slightly, but also my work to get somewhat easier for me (which could lead to more competition from human competitors).
I think persuasion will remain human-centric for a while. Persuasion drives a huge amount of GDP.
Now, now Fully Un -- you've been a good comment bot and while the GPT-based versions that replace you will be better, that can't take away any of your past glories. Your virtual presence here and at SSC will live forever as an important chapter in the development of artiicial minds. Well done, old soldier!
I have very bleak feelings about some parts of it: The idea of AI churning out lots of generic, cheesy story books for kids really depresses me. But then the idea of AI getting *good* in the way my favorite writers are good distresses me even more. Wonderful writers have been very important to me. Being able to write wonderfully well comes closer than anything else I can think of from capturing my sense of what it is to live an acute, meaningful life. When I think about AI being able to write that way, it's like the rug is pulled out from under me. I'm not exactly *against* it, but it makes me feel lost and despairing.
Can *you* write like your favorite writers? You're about a billion times more talented than any existing or plausibly forseeable AI, plus you have the benefit of intentionality. Can you just tell yourself "write like that!" with ever so many examples at hand -- and succeed? Could you do it even if you had years to devote to reading every word a given author had ever written? If not, why do you think an AI could?
The reason to worry about AI authors is that AI is already being used to write fiction.
See here: https://www.theverge.com/c/23194235/ai-fiction-writing-amazon-kindle-sudowrite-jasper
The AI is not writing an entire story here, just assisting. My boss likes to use the general distinction between "auto pilot" and "power assist" and the thing in the link is still in the "power assist" category. It is, however, writing complete paragraphs and I don't find it unreasonable to see the length (and quality) of the passages grow over the next decade.
Why is that a worry? If you were a carpenter, would you worry about the advent of cheap power saws? Or would you say "yippee! I can get the important work done a lotfaster without having to allocate X hours of drudgery hand sawing?"
When AIs write a sequel to "Pride and Prejudice" that is not only like the original in flavor and tone, but also creative and interesting, as if Austen came back from the dead, then I would ponder things. As it is, not even gifted human beings can pull off that particular trick. So my expectation that a human being can write a computer program to do it is close to zero.
I think we (you :-)) might be losing track of the actors here.
Eremolalas seemed worried about AI writing story books for kids.
You seemed to think that AI generated stories was a long way in the future (unless I mis-understood your response to Eremolalas).
I chimed in that AI is already being used to write fiction, so it may not be all that long before Eremolalas' fears are realized.
And you have just replied (to me) suggesting that this shouldn't be a worry.
*I* am not worried. Eremolalas is. And I don't think he/she/it will feel less lost and despairing with the power-tool analogy (though I think that IS a reasonable analogy).
My read is that Eremolalas was worried about AI writing *literature*. Good stories, stories that amaze and astonish. I don't think he or she is worried about AIs writing schlock, student C+ essays, corporate PR releases.
AI is just math, which is this very mysterious thing that has lots to do with the truth. It's not exactly awful that something as pure as math is potentially capable of producing deep art.
I still think it can't take over that though, because it's missing the communication aspect of real art.
"I have very bleak feelings about some parts of it: The idea of AI churning out lots of generic, cheesy story books for kids really depresses me."
In the book "The Diamond Age" one of the background details to the story is an Artificial intelligence (called pseudo-intelligence in the book) creating children's stories tailored to the young protagonist. Once we have a machine/program/bot doing this then there is no reason for the stories to be particularly generic (though they will very likely have a lot of tropes ... which is pretty common even for human hand crafted children's stories).
Yes, I read the Diamond Age. I don't see any reason to expect that that things will play out the way that they did in that work of fiction. I think they're just as likely to resemble the plot of one of my favorite works of fiction, Captain Underpants and the Attack of the Talking Toilets. (No I'm not joking):. https://www.amazon.com/Captain-Underpants-Attack-Talking-Toilets/dp/0545385679/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1677455890&sr=1-1
No, I spent some time with GPT on stuff that I have a professional background in, and found that even if _gently_ probed it will demonstrate it has no clue what it's talking about. It's better than a magic 8 ball since its word salad might point in the right direction for a google search if you know absolutely nothing on a topic, but it is no substitute for having deep domain knowledge that is applied to a specific situation. Of course, I'm of the opinion that LLMs will show massively diminishing returns in subsequent iterations because they don't do anything that can actually be called "understanding", though I've read some arguments to the contrary (we will see!).
While I understand your concern, I don't think it's necessarily right to imagine LLMs as replacing your utility.
If you want something practical, start practicing how to integrate e.g. ChatGPT into your workflow. Being someone who's good at writing prompts and understands when to use LLMs (including what they're good at, what they're weaker at, and what value you can add) is part of what it means to be good at learning and synthesizing. Your coworkers who haven't realized this will likely be less valuable in the future, but you can be on that cutting edge.
Now you know the cold chill those West Virginia coal miners felt. I wonder, as an upper-middle class highly educated professional, do your views on rural people resemble the rest of your cohort in society?
I think it's good to be empathetic with other cohorts of society, even if I can never truly feel how they feel. But I am certain that I would never ask the government to enact laws against automation or boycott progress. As much as I'm anxious about the future, I welcome it with open arms.
You didn't actually answer the question.
As I remember, their situation was met with a great deal of amusement and contempt, as they were told to learn to code.
First they came for the laborers
And I did not speak out…
Automated textile mills? No way!
This comment doesn't feel very charitable and shared-understanding-building, just needlessly provocative... less of this please?
I see a lot of my work value as coming from synthesis too, and I'm scared of LLMs rendering me irrelevant to decision-makers. At the same time, playing around with LLMs has actually increased my curiosity; I can't stop asking them for 30-second summaries of various topics for instance, knowing full well they spout occasional factual inaccuracies etc.
I bet this is not uncommon among people who are paying attention to this stuff. I'm in a pretty analogous situation, and I have a couple of thoughts on this.
One, I believe pretty strongly in the value of what I call "mental furniture." Although I love looking stuff up on the internet, I've realized that it's no substitute for having enough facts and insights stored in my brain to be able to interpret what I'm reading. I've found that facts as basic as the population and median income of the US or life expectancy by income give me an ability to interpret new related information much more quickly and accurately than someone who doesn't know anything. So although now all the facts are available out there, there's still a ton of value in knowing stuff. Arguably even more so because it allows you to know what to look for in external sources.
Two, LLM responses to mediocre prompts are pretty mediocre. I tend to snicker at prompt engineering per se, but the broader idea of how to frame a question and how to conduct a dialogue to get to a good answer is incredibly valuable. I'm sure you've found specifying the question to be very helpful in your work already, and now it's helpful working with an LLM as well. Another good way to think about this is that now you're a manager. You have a team of interns working for you, and managing them with good research requests will determine how productive they are.
So, there can be a feeling of dislocation, sure, but I think you'll find once you get used to it that your skills and curiosity are at least as useful as before.
Looking over the question again, I would add that you may still need to revisit how you approach your job. If the perspectives you offer are relatively straightforward, I don't think you should give up on curiosity, but you may want to start heading for higher ground proactively from a career perspective.
"Has anyone else experienced an identity crisis because of recent developments in AI? I spend a lot of my professional hours providing my perspective on business issues. I’ve always known there is nothing I say that some other smart person who spent a lot of time reading about the subject would say."
I have not (yet), but I've read a comment on another site I frequent that could be accurately identified as a post from someone having an identity crisis. So you aren't alone.
"LLMs do a really great job of synthesizing information, and this is starting to impact my overall curiosity because I see less value in compounding knowledge in my head. "
I wouldn't give up yet. The history of AII is that the first X% seems pretty easy, "everyone" extrapolates and then it slowly becomes clear that the progress slope gets VERY STEEP VERY FAST. ChatGPT is wrong often enough to be of concern. I'm not convinced that just making larger models will fix this problem.
Have you tried Bing AI? I have been testing the beta release and it seems more reliable than ChatGPT. It provides real and accurate URLs to resources that it mentions for example.
It has also managed to answer obscure questions about my local neighbourhood that Google totally failed to provide a useful answer for.
I'm surprised not to see much commentary on the use of large language models/generative AI to make probabilistic forecasts, whether for contests like Scott's (and my own) or for prediction markets. In my own analysis I found a 0.68 correlation between Bing/Sidney's forecasts and the ensemble forecast of my historically most accurate forecasters. Is anyone else doing this sort of work? I'd love to read about it.
On my next Mantic Monday post I plan to talk about https://arxiv.org/abs/2206.15474
Also, if you haven't done this yet, I highly recommend you feed the text of your 50-question contest—or at least the subset of props not yet resolved—straight into Bing/Sydney and compare the forecasts with some ensemble (or the straight group average). I think it will correlate above 0.5.
Looking forward to it. It's amazing to think about how much the "ForecastTech" world has improved since Zou et al. in June 2022 to my simple little at-home experiment in Feb 2023!
I'd love to hear more about how you frame forecast tasks for LLMs.
In this case, I did it in the simplest possible way: by pasting the text of the prop into the Bing chat window (with a little preamble). The results that came back were decent-looking in the aggregate but kind of wild in the specific, so I went on to suggest another set of prompts that could equip a human forecaster to make a decent hybrid forecast: https://braff.co/advice/f/driving-miss-sydney
Thanks for sharing. I like the analogy to autonomous driving levels. It also occurs to me that it's a bit like managing an employee depending on their level of task maturity.
Regarding the actual steps you took, I wonder if you could productively reverse them a bit and have Bing come up with the wild estimate first and then feed in the various intermediate prompts as ways for it to critique and revise its initial estimate, with the goal of getting a level 3/4 answer.
Hypothesis-driven problem-solving seems to work better with rationalists than with Sydney. She gets quite attached to the first thing she says, and it's hard to get her off of it, though if you change a parameter (like the time to resolution) she does amend the answer.
Small information request; does anyone have thoughts or resources regarding answers to the following questions?:
- Does LSD / psychedelics affect people with Autism differently? and if so, how differently?
- Does LSD alter the composition of the gut microbiome to a large extent? and if so, by how much?
Research: There's some low-quality evidence that LSD and MDMA may improve emotional responsiveness and sociability in some autistic people. https://www.spectrumnews.org/news/tripping-over-the-potential-of-psychedelics-for-autism/
Anecdotal: I personally can't tolerate either LSD or MDMA because of physical side effects (muscle spasms, tremors, jaw clenching, fever, vomiting), which are similar to but less severe than the symptoms I experienced in serotonin syndrome from an antidepressant. It's hard to enjoy (or identify) any social benefits when you're writhing feverishly in bed or shaking and vomiting on the bathroom floor.
But since neuroscience research says autistic people tend to have *low* serotonin levels, my unusual sensitivity to serotonergic drugs suggests that I may not be representative, and also maybe that my autism diagnosis is wrong?
If you're more sensitive, maybe you just need a much lower dose for the same effects?
With LSD, I actually tried microdosing first because I was worried about my reaction. I'm definitely not hypersensitive to the psychological effects.
At 25mcg, the physical effects were pretty tolerable - mostly just jaw-clenching - but the mental effects were barely noticeable. I thought I was better at seeing patterns/pictures in noise, and I seemed to be a bit more sensitive to the illusion of movement that you get when you stare at something for a while, but definitely nothing mindblowing.
50mcg was the same thing but stronger, plus a mild euphoria and some muscle tension and tremors.
Then 100mcg was the full-on painful spasms, high fever, GI symptoms, etc. There were definitely some interesting mental effects, but still nothing crazy. No deep meanings or universal truths to be found. I'd mostly describe it as enhanced pattern-recognition and an altered sense of time.
I haven't tried lower-dose MDMA because the full dose was just such a uniformly horrible experience that I have no interest in repeating any part of it.
I have been considering trying psilocybin, though, to see if maybe it hits slightly differently than LSD.
The reported differences I've seen between psilocybin and LSD are that LSD is more cerebral, intellectual, while psilocybin is more "embodied", as in, you feel it more in your body than LSD.
That's the first time I've heard a terrible experience with MDMA. Did you test for purity?
Yes, we tested it.
It's so weird. MDMA puts thousands of people in the hospital every year and kills a few, but people are less willing to believe that you had a bad reaction to it than they are to believe you had a bad reaction to weed or even caffeine.
MDMA hospitalizations and deaths are pretty much all due to contaminated drugs or dehydration. That's not what you're describing. I'm not saying it's outside the realm of possibility, but it's certainly the first time I'm hearing of such a case.
I would think intolerance to the bodily effects of lsd and mdma is more evidence for an autism diagnosis because autistic people are known to struggle with interception and hypersensitivity.
Maybe in general. In my specific case, I'm not talking about subjective hypersensitivity to harmless interoceptive signals; I'm talking about being hospitalized for a rare life-threatening drug reaction to an SSRI (https://www.mayoclinic.org/diseases-conditions/serotonin-syndrome/symptoms-causes/syc-20354758), and also having similar but milder physical reactions to other serotonergic drugs.
Interoception*
I have sometimes suspected I have autism or asberger's, and acid trips have made me feel like I'm cured from it. These days I don't feel as autistic, and this wasn't the result of any one trip, but the psychedelics I tried seem relevant to it.
If you're worried about trying a psychedelic because you're autistic, well, you should be worried because psychedelics are inherently risky. The last time I did LSD I went psychotic, and later found out that the psychosis stuck since I went psychotic again when I went off meds (on the reasonable theory that I would be fine if I didn't do any drugs again. I didn't, and I still lost it).
But, those psychoses were critical parts of my spiritual development. You can read about that here:
https://squarecircle.substack.com/p/to-die-before-you-die
And especially here:
https://squarecircle.substack.com/p/the-real-and-final-enlightenment
So it's a mixed bag.
It's not like it's that common to go psychotic from doing psychedelics, and it didn't happen to me from a single trip. You will probably be fine if you do it once a year, for example.
On your second question, my first thought was "Why would it have anything to do with the gut microbiome?" but my second thought was that it's sufficiently plugged into general metabolism that it would actually be surprising if it did NOT affect the microbiome. So yes, excellent question, never would have thought to ask it.
A quick search turns up one single experimental study on LSD and the gut microbiome, a study in mice: https://bpspubs.onlinelibrary.wiley.com/doi/abs/10.1111/bph.15977. They gave mice LSD for seven days and compared their gut microbiomes with controls, and found that the Shannon alpha diversity decreased (whatever that means) and altered the ratio of a bunch of bacterial species.
Then there's this paper that's more of an editorial, and discusses potential interactions between psychedelics and the microbiota-gut-brain axis. Seems like a fascinating niche. https://www.sciencedirect.com/science/article/pii/S1697260022000576
If low dose LSD can help ADHD, can it help autism?
There is at present TOTAL CRISIS of authority in the media and intellectual world. I ask ACX readers to state what sources of informatiom they actually TRUST, sources that can include opinion and speculation.
Nobody (not myself, at least, that's what this comes down to) will scourge or harangue you for trusting these sources, or at least that is the intention of this.
I trust the following persons/sources to tell the truth:
Mary Pat Campbell: https://marypatcampbell.substack.com/
Glenn Greenwald: https://rumble.com/c/GGreenwald
Scott Alexander is trusted implicitly, mostly because he "shows his work" and is fundamentally committed to the question of "trust". I think that is why we all are here, to be honest.
I have recently talked to co-workers, friends, who believe that Three Mile Island was an epic nuclear disaster, and that Atlantic hurricane data prove that storms are getting more severe and intense b/c climate change. Both those things are not true, and repitition of them are lies. In these two small things, I see people I thought to be rational, refusing to listen to or consider any evidence. My questions about Three Mile Island were met with, "Where did you hear that, ON THE INTERNET?"
Recent news like the Washington University (STL) whistleblower, who published a long essay about what she saw as huge ethical lapses at the clinic for "transgender care", can easily be dismissed because the "news" was only published on "right-wing" sources like the Epoch Times. But this creates a tautology, where partisan organizations deny a voice to stories they ideologically diagree with.
I am anomic, afraid, without constants or norms, and unable to trust anything. What is any way out of this?
>I have recently talked to co-workers, friends, who believe that Three Mile Island was an epic nuclear disaster, and that Atlantic hurricane data prove that storms are getting more severe and intense b/c climate change. Both those things are not true, and repitition of them are lies. ...
My example for this is a brilliant retired professor friend (who is also a lifetime NYT reader) who seems legit convinced climate change induced extreme weather events will be killing significant percentages of the US population soon. Like there will be one super serious tornado, and he will be convinced that there is the super concerning exponential trend. And I will be like "climate doesn't work that way, while there might be changes they have so far been very gradual and project to be gradual", and he will bring out some NYT factoid about how tornados jumped from 976 to 1429 between 2016 and 2017 and OMG by 2040 there will be 8 million tornados.
And then I point out there were 1,126 in 2018, and 1,075 in 2020, and he just seems to think that I am just in denial. The NYT has taught him that we are all going to die, and goddamnit he isn't going to let facts change his view about that anymore.
As for who I "trust"? Scott? Maybe no one...
IDK, I was always a highly skeptical person (was giving the local bishop the inquisition on the problems with Catholicism and Christianity at age ~11), so my trust level was always very low (shitty childhood).
Might be easier to do a time series.
Trust for: (soft science papers, hard science papers, NYT, NPR, FoxNews, random bloggers):
2000: 40%, 95%, 90%, 90%, 3%, 20%
2010: 35%, 85%, 75%, 60%, 3%, 20%
2020: 20%, 60%, 50%, 20%, 5%, 30%
Trust no one.
But I became aware recently to look for 'electric fences', where if you touch 'this' subject, you get shocked.
Electric fences are set up to shield the truth.
Powerful people own the information, and have enough power to set up electric fences around things they don't want examined.
Look at the electric fences: COVID lab leak, Hunter Biden's laptop, climate crisis.
"Air-Horn" is my term for these subjects. Things like computer security, or encryption, immediately run into somebody blowing the AIR-HORN of "child pornography".
Blowing the AIR-HORN obliterates all thought or reason, there is an EMERGENCY, can't you hear the AIR-HORN?
There is no way to talk or reason at that point.
What's the definition of "epic"? Three Mile Island was the fifth worst [nuclear] disaster in history, and the worst in the US.
Nuclear is held up to such high standards compared to every other type of energy generation it's almost comical. I know people are scared of the radiation aspect, but way way more than they should be. Wind Power generation is more deadly to humans than nuclear (https://www.statista.com/statistics/494425/death-rate-worldwide-by-energy-source/#:~:text=Clean%20and%20renewable%20energy%20sources,catastrophes%20like%20Chernobyl%20and%20Fukushima.) Wind, nuclear, and solar are all so low as to be negligible, but we treat nuclear as if it's worse than coal.
If the other forms of power generation were as safe (in practice!) as nuclear has been, we'd be hearing all about how wonderful it is from a safety prospective.
I remember TMI well. Even to use the word "disaster" when referring to it is to abuse English to the point that one has to suspect dishonesty.
5th worst "nuclear" disaster. You accidentally a word. As energy-related disasters in general go, it is not in the top 1000.
Exactly this. For many people, millions, including my friend, the idea that "Three Mile Island" was "terrible" is embedded in their minds at a deep level, and not even handing them a printed copy of the "REPORT to the US SENATE on the TMI Accident" could dislodge that.
We are also at a crossroads where even printed documentation is suspect. My friend's dismissal was virulent: "You read something ON THE INTERNET?" -- I can't convey adequately the level of contempt that he (a professional actor, and voice-over artist, and very, very good) charged that with. And now we see AI's able to generate documents that seem exactly like the most persuasive and certified reports of our society.
If a document printed on paper in 1980, complete with all the artifacts like pop fonts, tables and charts made with a typewriter, and with physical signatures and stamps from the National Archives, retrieved from a US Govt source, if you are not even willing to look at that, what defense have we against the million-lie astroturfing of tomorrow?
Oh, nobody will even entertain or look at any thing that might gainsay their previously ordained certainty. It is sad, many will not risk shattering the comforting worldview -- a worldview and action-sphere that distills down to doing exactly what their (left-wing, Berkely/Cambridge Green Party granola-cruncher) mommy tells them.
Generally speaking, my heuristics are vaguely like :
1- Claims with no obvious emotional (political, religious, etc...) or profit antecedents or consequences are trustworthy.
2- All public statements a politician makes are carefully-crafted and hyper-optimized to do one thing and one thing only : make them look good and increase their chances of success in the political system they are operating in.
3- All public statements a corporation and\or its representatives make are carefully-crafted and hyper-optimized to do one thing and one thing only : make it look good and increase their chances of profiting in the market they are operating in.
4- Everything happening in a place you don't understand the language of might as well be fiction, trust no translation. Your degree of distrust should vary with how concrete and close-to-the-facts the claimed event is : A car crash needs no words to convey, but a complex political drama is ripe for (deliberate) mistranslation and missing context.
5- Generalities are less trustworthy than specifics even if supported with specifics, much less so if not supported with specifics.
6- "Experts" are vastly overrated the vast majority of time. To a very good first approximation, an Expert is a physicist or an engineer or a mathematician or a computer scientists or a historian talking only about the subfield they got a phd in.
7- Imagine that the writer/speaker of every critical piece of media is your sworn enemy and that its target is you, how would that affect your interpretation of it ? What is an alternative phrasing that fits all the facts yet doesn't paint you as bad ? If the piece of media is praising instead of critical, imagine that you hate the thing/person it praises with the passion of a thousand sun, and the writer/speaker of the praising as a hired mouth that you saw pocketing money with your very eyes.
8- As much as it's fun and fair to taunt people you hate by repeating unflattering falsehoods about them, your mind is a sacred place that you should not let falsehoods creep into, contemplating unflattering-to-you truths or enemy-flattering truths privately will not boost the enemy, and will increase your understanding of both you and your enemy.
9- On the subject of enemies, while everything they say must be necessarily discounted by the fact that they will lie and mutilate truth as necessary to paint the something or the someone they hate in bad light, give special private consideration to their criticism if you care about its truth value. Enemies are good at finding inconsistencies and wrinkles in the things they are enemies of, and naturally incentivized for such. The inverse also holds for friends and admirers.
10- Vast majority of news is useless non-actionable negative entertainment. Information exist mainly to fulfill some utility : to base actions on (action optimization), to base views on (value optimization), to increase your intelligence or your mental toolkits of models and ways of thinking (performance optimization), or simply as idle curiosity (exploration), building a model of the environment simply to keep your brain sharp.
The vast vast majority of news fulfill none of these things : you can't base any actions on it because it's simply none of your business ("Putin Invaded Ukraine"), you can't base values on it because it's either too easy or too hard to infer what value you should take out of the whole situation so it adds no useful values (1-"Putin Invaded Ukraine ! Ohh gee, I guess killing people is bad" ; 2-"An actress I don't know claims her boss harrassed her ! Ohh gee, I guess I will just believe this random woman with obvious incentives to lie in a profession notorious for promiscuity and take her words over the random guy that I also don't know and haven't even heard his defense"), they don't increase your intelligence or point you towards any new mental models since they are negative-emotion-heavy, yet forgettable, thrillers with most of the facts bungled, blinded, and reworked to get maximum oomph, and they are not even good topics for exploration, as they mostly induce negative emotions when they're not boring. I can find no worse works of semi-fiction.
Very good list!
> 6- "Experts" are vastly overrated the vast majority of time.
Doubly so if they're being interviewed in some kind of mainstream venue/channel/publication, because then they were likely selected to convey legitimacy to a specific viewpoint. It's very, very uncommon for a mainstream source to feature two experts that disagree and provide multiple perspectives.
> Enemies are good at finding inconsistencies and wrinkles in the things they are enemies of, and naturally incentivized for such. The inverse also holds for friends and admirers.
This has actually been studied a few times under "my side bias". Totally a thing.
Any statement that mixes units in comparison (Event A has increased by [absolute number] but Thing B has increased by [percentage change]) is a lie.
This is outstanding. Should be broadcast all about. Great reasoning and thinking, exactly what I hoped for by asking these questions. Well said, Sir.
Thank ! very kind words that made my day.
Thank you for a very substantive comment. There's so much serious thought here I have to chew on it for a while.
BR
Woody Harrelson did nothing wrong.
I agree, we're going through an awkward period when the credibility of corporate media is crashing, yet there's nothing to replace it yet as a standard of credibility.
I'm a big fan of https://treeofwoe.substack.com and https://www.piratewires.com and also https://spiritualsoap.substack.com/
Jesse Singal and Katie Herzog of the Blocked and Reported podcast. But it has a fairly narrow focus (mainly culture war stuff) so may not be what you are after. I respect their commitment to being able to back up what they say and to correct what they get wrong. What is most worrying to me is that some of the people who attack them are perfectly respectable intellectuals who just seem to have lost their moorings.
Russell, I agree I find them relatively trustworthy for journalists. Also I promised you a podcast comment, and I have a draft of one in my email, but it keeps growing and growing because I have listened to about 30 of them now.
So the TLDR of the as yet unsent comment is I am really enjoying the podcast and think your choices of topics/guests are interesting, your voice pleasant, and interview style maybe not an "A", but a solid "B" that isn't taking anything away from the podcast. You let the guests have a lot of rope to speak, which is great. The audio quality is good enough to not be noticeable one way or the other (which is all I am looking for there).
I have a long flight coming up, so I will try and send you a proper review during that.
Thank you for doing this. I’m looking forward to the detail but this is really encouraging.
I agree on Jesse Singal et al.
TL;DR up front: paid-for industry sources are the best for obvious reasons. In the UK, the most consistently-good free source is the BBC (written news, not TV/Radio)
Explanation:
From ~2012-2016 I worked in various editorial roles for a large legal publishing firm. For a good part of that period I was on the 'news analysis' team for our clients (lawyers & those in that space). My job was to wake up very early and read every broadsheet newspaper as fast as I could, along with a feed of articles from AP, Reuters, BBC, Bloomberg and a few others. My team and I then selected articles we thought would be of interest to our clients and wrote summaries of them. This meant that for several years I effectively read the vast majority of the "serious" news output in the UK every single day. There were lots of interesting things about this but the one that's relevant here is that I would read *the same story* reported many times in different papers, and so built up what I think is a fairly good view of the biases & failings of most of these publications. Because I was also reading a lot of industry publications that followed specific stories in more detail than the general prcess, and very often writing up the relevant court reports, I also developed a decent view of which publications tended to make mistakes (or tell lies if you're uncharitable) which were not later corrected.
I think it goes without saying that there is no source on which you can rely to be 100% *correct* - but that's not what you're looking for; you're looking for someone to be *honest*. The problem is that as Scott himself recently argued, the MSM is rarely in the business of telling outright lies; instead they tell *technically the truth* in very misleading ways. In addition to applying all the usual martial arts of rationality, it also helps a lot to be aware of the existing biases of each publication.
Unsurprisingly the best and most honest reporting I saw was the most expensive stuff - the reports written by specialist reporters for Montel, Bloomberg, Reuters etc and for which industry clients (particularly traders) paid a lot of money to access in realtime. A surprising amount of this never makes it into the mainstream press; not because of any conspiracy to suppress the truth afaik but rather because mainstream press (accurately, I assume) thinks no-one will care about it.
For a free, generalist source, the best *by far* in terms of honest reporting in the UK I found to be the BBC news website; a lot was covered and was generally well-handled. The downside to this (and the way this was achieved) is that the BBC tended not to report anything it was not very sure to be true and that it had corroborated independently. This meant that they were often slower to report than other outlets, and reported less detail (particularly when it came to personal or prurient details). Their main bias was towards *authority*, which makes sense as they seem themselves as one. There were also individual topcis on which individual editors and staff at the BBC had strong personal opinions which they failed to conceal (example: during the period mentioned above they were extremely pro assisted dying and would report in great and sympathetic detail on a vast number of cases which were completely ignored by the rest of the media; almost all these articles came from a single journalist. I do not know if this is still the case as I no longer read the news as thoroughly). Most of the time, however, they take fewer shots and get more of them right.
If you can find free Reuters or Bloomberg coverage on the subject you care about it will normally be mostly correct. The downside here is that the best stuff is not free, and relatively few subjects are covered.
You should strongly avoid almost any broadsheet, even 'reputable' ones such as the FT or the Times. They are right just enough to trick you into believing Gell-Mann amnesia shouldn't apply to them, but it still does.
I realise that none of the above is a staggering, new insight. Sometimes the truth is obvious!
> In the UK, the most consistently-good free source is the BBC
The big problem with BBC news, both TV and online, is that it is highly selective. Anything contrary to the liberal world view they wish to project is omitted, or downplayed by being hidden away in a subsection of the website, or left prominent for a shorter time before being whisked into obscurity.
Of course, someone liberal themselves might not notice this much, or may even welcome it, but in relying solely on BBC news they would be in something of a filter bubble all the same.
The BBC have a strong pro-immigration and pro-diversity bias, and will do their utmost to ignore or suppress anything which puts immigrants or minorities in a bad light, or suggests that a continuing liberal immigration policy is not universally popular.
For example, I don't think they reported at all on the large anti-immigrant demos in Dublin and elsewhere in Europe a couple of weeks ago. I also very much doubt they have ever reported, nor ever will if they have their way, on the adverse impact of mass immigration into Sweden for example.
Also, they are clearly biased in favour of Labour (roughly equivalent to US Democrat). If a Tory ("Republican") politician is involved in a scandal, the BBC headline will typically read "Tory politician caught in bed with underage boy" (or whatever the misdeed was), whereas the equivalent Labour headline would read "Politician caught .." and only tucked away in the article, if anywhere, will their party affiliation be mentioned!
> Of course, someone liberal themselves might not notice this much, or may even welcome it, but in relying solely on BBC news they would be in something of a filter bubble all the same.
of course, that's a fully general argument. Right wingers will not notice the bias in right wing oulets, etc, etc.
Since the Financial Times was sold to Nikkei in 2015, they seem me to have moved to occupy the position you describe the BBC occupying at that time, which in contrast seems to have become more of a mouthpiece of the UK government. The BBC's notions of authority also seem to have become somewhat tainted by tribalism. Unfortunately the FT is neither free nor covers as wide a range of topics as the BBC, so it's closer to Reuters/Bloomberg than a general source.
This is good stuff, thanks
The BBC protected pedophiles. Defund the BBC. Instructions to defund the BBC:
1. Follow this link to declare you no longer need a license. https://www.tvlicensing.co.uk/cs/no-licence-needed/about.app
2. Cancel your direct debit.
3. Defund the BBC. Don't fund pedophiles. Post a reply if you defunded the BBC and please encourage others to do so by spreading these instructions.
Your comment was very informative, thank you!
If you treat this as a binary question of who to trust or not trust, you've already failed, as you are just putting yourself in a different epistemic bubble. In particular you should be way of anyone who's main way of marketing themselves is to say the "mainstream media" are bad and untrustworthy, and therefore you should trust them uncritically.
The boring answer is that all media has some degree of unreliability, whether due to incompetence or deliberate bias. In general the larger more mainstream sources tend to be more reliable, taken on net, because they have more to lose from being wrong. Though what facts one chooses to highlight is itself a conscious choice that can inject bias.
>In general the larger more mainstream sources tend to be more reliable, taken on net, because they have more to lose from being wrong.
This only holds if there are independent authorities with enough clout to be able to prove them wrong. Which does not apply to fundamental culture war narratives, as an important example. Sadly, there's no good reason to expect anybody else to be much more correct on net on those matters either.
re: Randomstringofcharacters's comment (in case Substack borks the reply function again) -- bravo to the first part of this. Binary trust/don't-trust completely misses the point of what interacting with information can be like for the open-minded.
However, I'd say that the question of reliability is separate (as in Scott's posts about the media not actually lying much) from the question of 'being wrong'. The media is likely a tremendous net negative in terms of helping people correctly model reality, and I'd call that deeply unreliable. They're not often wrong on the facts, though.
" The media is likely a tremendous net negative in terms of helping people correctly model reality, and I'd call that deeply unreliable. They're not often wrong on the facts, though."
^This.
The operative principle of the media is to push a particular preferred narrative. Sometimes this is done explicitly, like at the NYT or NPR, but can be seen when J-schools talk about "changing the world" or "providing the information necessary to make the right decisions/Democracy(tm)." The object is to influence what people believe and then presumably how they act. To do this it's unnecessary to fabricate things, one can simply curate which facts will be presented and how.
I generally find The Dispatch to be a decent source of right-of-center news
The news itself is right of centre?
Presumably news that supports right of center positions.
It might actually be. If you identify the left as more optimistic about human schemes and efforts, and the (traditional, conservative) right as more pessimistic, and also account for the fact that the center probably trends at least somewhat more optimistic than reality supports, which seems like plain old human nature (cf. the oft-told tale that the majority of us consider ourselves above-average drivers, most retirees overestimate how long they'll live and be in good health, business plans rarely come in under budget and ahead of schedule, et cetera) -- then any steady stream of factual reporting on the results of human efforts and plans will be a little surprisingly disappointing to the center, and only match expectations of those a bit more pessimistic than the center.
Hah, poorly phrased indeed
I'm not sure that I trust anyone to be 100% accurate 100% of the time. In general I rate what people tell me based on a number of criteria:
- does this person have an incentive to say this, regardless of its truth -> downgrade
- does this person show their work, cite sources, etc. -> upgrade
- does this person show evidence of training in the field, including the ability to communicate in an appropriate register -> upgrade
- is this information plausible, based on other information
- can I test it? what happens if I do?
- does it generate testable predictions? Are people claiming to have tested them? what results do they report?
- does it matter? If you tell me you had spaghetti for lunch, I'm not going to spend time thinking about whether you are telling the truth.
- has this person been reliable in the past? if so, was it in a relevant area? (The person who gives me the best recipes for ingredients I don't know how to cook with gets a big upgrade for recipes; almost none for details of non-culinary history)
You get the picture. Except in practice I don't bother unless there's something new involved. The replication crisis caused me to revise a lot of my prior beliefs towards greater uncertainty. So did discovering that in the 30+ years since I'd been in college, biology had advanced in ways that threw much of what I'd been taught into question. So had paleontology. I treated both as pretty close to "relearn from scratch". OTOH, I rejected several new claims about history, feeling that elderly eye witnesses I'd known in my childhood were more reliable than seemingly politically motivated claims from people too young to have ever met any eyewitnesses.
Boy, the Greenwald link seems pretty Fox Newsy to me. “Biden Leaves Ohio to Die”?
Greenwald, himself, broke the Edward Snowden story, and also the reporting on the Hunter Biden laptop. In both cases, he was stiffarmed by traditional media. His whistleblower bona-fides are pretty solid, to my thinking.
Didn't Greenwald break the Epstein story too?
Seymour Hersh broke the My Lai massacre and the bombing of Cambodia, but he's spent the past decade or so peddling nonsense. It is a failure of logic to assume that because a person was conspicuously honest and diligent once, they will always be honest and diligent.
With investigative journalists, there is a failure mode where their early high-profile stories put them in a place where they will always and only have an audience for stories catering to one side of the political divide, because one side sees them as a hero and the other as a traitor or a fraud. Too often - not always, but often - they fall to the temptation to turn whatever bits of fact come their way into the stories their audience wants, without due diligence. Because their audience will believe them anyway, and the other side never will no matter how diligent they are. Greenwald hasn't gone as far down that path as Hersh, but there are enough warning signs that I do not extend him a high level of trust.
Hersch has had more decades in which to lose his mind than Greenwald has.
This comment and your original one sound to me like you're trying to stick to too much of a black and white model of trust. Everyone is wrong all the time, even ourselves. So I think a trust/no trust model doesn't work very well. Like Scott has written about, when Fox News or The New York Times (pick whichever one you trust the least) writes that there was an earthquake, you can pretty much trust that there was an earthquake. After that, it gets harder. Not only does a binary model not work well, it often ends up leading to a liars/truth tellers, in group/out group dynamic. People you know who are wrong about things, even denying evidence to the contrary, are probably not liars as such, they are just misinformed and sticking to their guns. Everyone is prone to it.
The Hunter Biden story was repeated ad nauseum all over conservative media. Fox didn't shut up about it for a year. In what sense was he stiffarmed? Even mainstream center left papers like the NYT were reporting on it from fairly early on, in the context of "this is something that is being alleged"
The mainstream was reporting that 50 CIA agents signed a letter stating the Hunter Biden's laptop story bears all the earmarks of a Russian misinformation campaign.
I also need to mention that Greenwald was "stiffarmed" by The Intercept, the newspaper/publisher he FOUNDED, who refused to publish the Hunter Biden story at the time.
I know this seems like I am GG himself, but the facts here support his credentials as a journalist reporting truth that corporate media have decided to suppress.
IIRC, even Fox News - who a few months later would be pumping up various conspiracy theories about the 2020 election that their own anchors were privately saying were totally bonkers - declined to break the Hunter Biden story. It simply wasn't well corroborated.
It's not like that sort of thing is unprecedented. The infamous "Steele dossier" wasn't reported on by any major outlet until after the 2016 election, and when it was, a bunch of media outlets came out and said some version of "we were shopped this story but declined to run it because we couldn't confirm its reliability". It was Buzzfeed that actually published its contents.
Were the Snowden leaks "well corroborated"? No seriously major political scandal I can remember was ever well corroborated from the outset. Publishing the story is what opens the floodgates to further leaks and corroboration.
We disagree. The Hunter Biden story was suppressed, censored, memory-holed, deliberately hidden, exactly at the time of the US Presidential election. Any subsequent reporting on that story happened after the crucial time of Operational Effect. All US media conspired to dismiss and hide the story, and to actively discredit and slander any source that reported it.
This is exactly what I am talking about re: Authority, crisis therof
You don’t think Biden left Ohio to die is kind of a Fox News headline?
Given that the train derailment was not actually dangerous, and anybody who believes that, learned their chemistry from Spider-Man, or Jane Fonda, yes, the headline was disengenuous. I am not saying Greenwald is an Avatar of Truth, I guess I am saying I don't immediately assume he is lying.
I haven't looked into the chemistry of what was on that train.
What I do think, though, is that if the response to a train derailment in Ohio needs the intervention of the President of the United States then the USA needs to be rethinking its approach to federalism. The US has eighteen million government employees, it should be capable of dealing with stuff like this without the need for the old dude at the fancy desk.
It's funny because I don't trust Glenn Greenwald to tell the truth at all, he strikes me as very dishonest. If you ask me for examples I can't give you a definitive list or anything because I don't keep track, nor do I regularly follow him. Oftentimes the dishonesty strikes me as "technically true but highly misleading", other times outright false, and sometimes (I'm not sure if this counts as dishonesty or not) he's just really inconsistent on certain questions where he acts like he has really solid views (I know that's very vague but not sure how else to describe it).
I don't know anything about Campbell, Scott I trust to be honest but not necessarily to be correct, and the Washington U whistleblower article I read and don't dismiss as right-wing whatever.
I recently came across the followinging, which I think shows that Greenwald is untrustworthy:
https://rumble.com/v1zeexw-the-elitist-snide-worldview-of-yoel-rothtwitters-former-censor-in-chief-wit.html
At the 12:47 mark he plays a clip from an interview of Yoel Roth, the former head of trust and safety at Twitter. His commentary on the clip starts at 14:24. He claims, with no evidence, that the threats made against Roth were merely criticisms of Roth's work at Twitter. He claims that Roth views himself as a victim of “people, the sort of unwashed masses, the ones who have no power or position relative to [Roth],” even though Roth doesn't describe himself as a victim and the only people he mentions are Kelly Ann Conway and Donald Trump.
Greenwald says that, “I think probably the intonations of how [Roth] speaks in that clip are one of the most important things.” He introduces the clip by saying “at the risk of nauseating the audience....”
Greenwald is really an enigma to me, but whatever is going on with him, he's not acting like a trustworthy journalist.
Thanks for calling out this clip and describing it in an interesting-enough way that I actually tapped it and jumped to 13+ minutes. It's odd, though -- I'd heard of Yoel Roth whenever the twitter stuff was happening, but hadn't bothered to watch any of it. Even the 20 seconds of the clip I could stand filled me with such a revulsion for this cringing, supercilicious creature that although I still don't know who Gleen Greenwald is I now like him twice as much as I did five minutes ago. I'm a bit shocked, insofar as generally I regard journalists as the very lowest category of human, and siding with one makes me uncomfortable.
> Oftentimes the dishonesty strikes me as "technically true but highly misleading", other times outright false
So like pretty much every other journalist then? The only questions then are 1) how often their take is wrong or how misleading, and 2) is their perspective one that is being suppressed or isn't being discussed in the mainstream? If so, then that by itself has value.
Maybe someone in the comment section who will fess up to liking Nassim Nicholas Taleb can chime in on why he calls Greenwald tremendously dishonest (like Snowden).
I don't know what Taleb's beef with Greenwald is, but he has a history of first praising people to high heaven and then, when they fail to live up to his somewhat arbitrary expectations, condemning them as worthless scoundrels. Something like that may be in play here.
Greenwald is in the category of internet contrarian who treats uncritically repeating the positions of people like Putin as being a demonstration of independence because its in opposition to the American mainstream. A stopped clock is sometimes right, but reversed stupidity is not intelligence
Calling him a contrarian implies that his opinions don't derive from a consistent set of values. That's not been my experience when I've seen his stories. He seems staunchly anti-imperialist, anti-war, and anti-authoritarian. The US is all pro-all of those things, so of course he would find himself regularly opposing US policies.
I suspect any pro-Putin sentiment you're inferring is you swallowing the mainstream false dichotomy that disputing US actions is necessarily indication of support for Putin.
Interesting. Greenwald seems honest to me, at least I am willing to view him as having no overriding ideological bias or being the tool of a govenment.
Of course he may be mistaken, or too small to appreciate the larger game that is being conducted around him. If applying David Foster Wallace's criteria of the "intellectual appeal" vs "emotional appeal" vs "technical (ie expert) appeal", he is at least capable of not insulting or blatantly lying to me, a combination of 1 and 3 that achieves the purposes of (2)
Greenwald suffers, IMHO, from being rather alone in his journalistic quest (which makes me trust him rather more, actually, supra 2), and I have to be blunt, where some have a "face for radio", GG has a "Voice for print" -- his on-air persona is unfortunately whiny and not-compelling, to say it mildly.
Yet I trust him. But trust is an incredibly low bar now, as I hope I have intimated above.
BR
He's much like Curtis Yarvin, who is a great writer but really needs to take some speech courses if he's going to continue to appear on podcasts.
To inject a bit of levity, Linda Cohn, longtime ESPN reporter who was in line to become the first female anchor on "SportsCenter" -- at the time a huge and visible mega-media position -- was ordered by ESPN brass to take six months' of speech therapy because .... well, her intense Long Island accent simply wouldn't fly. She agreed because, well, yeah, Lin-DER! :-)
I think he does have an ideological bias (almost everyone does), it's just hard to describe exactly because it doesn't fit in exactly with either the left or the right, and frankly because I think he's not super ideologically consistent, more driven by hair trigger opposition to the people he doesn't like (these days usually liberals).
TBH I don't even give him this much credit. Main reason is all the times that he says things where he's clearly trying to be technically true while also being clearly misleading. Couple of examples come to mind (these are all from memory, I don't have the links):
- claimed he never supported the war in Iraq, called someone a liar for saying he did. When pressed (Greenwald wrote a book in which he admitted he supported the war at the time) his reasoning was that he wasn't a public figure and so never publicly advocated for the war ... which of course isn't anyone's definition of "supported". IIRC calling the person a liar was on TV, the explanation was on twitter.
- when talking about Assange he repeatedly tried to give the impression that it's liberals who were responsible for prosecuting him when it was the trump administration (the Obama administration refused to do it). Similarly trying to claim that no liberal media figures had criticized this. Again always said in some weaselly way to be technically true but misleading.
- in an effort to claim that Republicans are the real party of the working class, wrote something that tried to attribute larger stimulus checks to Republicans, when the vote broke down almost 100% along party lines
- there was some committee vote that went against trump's plan in Afghanistan that he repeatedly attributed to "Liz Cheney and pro-war Democrats" when like 80% of the relevant committee voted for it including the majority of both parties.
- called someone a coward for deleting old tweets on at least one occasion but often deletes his own old tweets (this is less a lie than just being a jerk tbh).
There are plenty of people who get out over their skis in reporting something where they believe what they're saying ... but the above are all premeditated attempts to mislead people.
> claimed he never supported the war in Iraq, called someone a liar for saying he did
I'm not familiar with the circumstances here, but "supported" does in fact have many possible interpretations, so I can easily see confusion develop around this question.
"Publicly expressed support", as in, "an endorsement" is one interpretation. This happens a lot with candidates running for office. You can even endorse someone without fully supporting their policies. From your description it sounds like Greenwald did not publicly endorse the war.
"Privately agreed with but didn't publicly express support" is another interpretation. Many voters and non-voters fall into this camp, as in they don't go around loudly shouting their policy preferences but obviously they agree/disagree with various positions. From your description it sounds like Greenwald did privately support the war.
"Materially supported" is another interpretation, which goes one step beyond just public support. For instance, are people who send "thoughts and prayers" to victims of a tragedy really supporting them?
I think there's a lot more nuance here than you're allowing.
> when talking about Assange he repeatedly tried to give the impression that it's liberals who were responsible for prosecuting him when it was the trump administration (the Obama administration refused to do it)
"Liberals" are responsible for prosecuting him *right now*, so they are the people he's trying to shame. Who started it isn't really relevant is it?
I have no particular opinion on the rest so I won't bother looking up the data to confirm and reply.
> I think there's a lot more nuance here than you're allowing.
I don't think there is, but even if there is and I'm wrong about that, he's still wrong to call people liars over it.
> "Liberals" are responsible for prosecuting him *right now*, so they are the people he's trying to shame. Who started it isn't really relevant is it?
IIRC he was blaming liberals for it even during the trump administration! Plus ... your reasoning doesn't excuse the misleading statements he has made where he tries to make it sound like liberals are entirely responsible for the situation.
Maybe I’m being naïve, but I like to thing the PBS News Hour is pretty even handed.
I think they're trying to be, but are held back by the fact that they simply don't have equal representation of right-wingers and left-wingers on their staff.
Reuters and The Economist are the most informative and least biased I've found, though they're not perfect and you still must use your judgement when reading them.
Same here.
I also find the Economist's analytical writing to be quite good. Which is not to say that I always end up agreeing with their take, at all; but they are clear and generally level-headed about it. I have to put some thought into why I disagree when I do.
Also the dry English wit that sneaks through now and then makes me LOL.
The stuff that they might as well not even publish in my experience is their annual predictive-punditry special issue and their quarterly "1843" lifestyles, I guess?, thing.
Reuters and the BBC were paid to do government propaganda.
https://thegrayzone.com/2021/02/20/reuters-bbc-uk-foreign-office-russian-media/
https://www.youtube.com/watch?v=eJ8mycEVgGQ
>The leaked materials show the Thomson Reuters Foundation and BBC Media Action participating in a covert information warfare campaign aimed at countering Russia. Working through a shadowy department within the UK FCO known as the Counter Disinformation & Media Development (CDMD), the media organizations operated alongside a collection of intelligence contractors in a secret entity known simply as “the Consortium.”
I most definitely don't have an opinion on the matter and could go either way, but you're saying "Reuters and the BBC were paid to do government propaganda" as if it was an established fact, which means that you're treating the Gray Zone and YouTube/Jimmy Dore as reliable. How did you determine that? Saying that a source is unreliable by relying on another source is a somewhat circular/unsolvable conundrum (I'm not saying I have an easy solution to that problem either...).
Yes I got a good LOL out of a conspiracy accusation being backed up by...those links.
+1. I often disagree with the analyses and interpretations of The Economist but I feel pretty confident that something happened if they say it did, and that they'll cover most major international events. News agencies like Reuters, AP, AFP, have a business model predicated on reporting facts, so I tend to trust them as well.
I would also rely on triangulation between conflicting sources. If CNN and Fox agree that something happened, that's pretty solid ground IMHO.
The Economist probably is the best "mainstream" publication. Back when, they ran a cover article on "The Art of the Lie", about how crazy "alternate facts" spread around on social media and partisan news sites. One of the examples of this sort of obvious falsehood which wouldn't go away, was Breitbart, et. al. pushing the idea that Hillary Clinton had some sort of undisclosed medical issue. I mean, come on.
They published on September 10th. The next day, of course, Hillary collapsed in public, due to an undisclosed medical condition.
So that's part of the problem: the Overton window, at least for the mainstream media, isn't wide enough to encompass all of the actual reality around us. The idea that Hillary had some sort of potentially-disqualifying condition, even temporarily did, was totally unacceptable; and so they didn't accept it. (Even afterwards, the media didn't vigorously follow up the remaining questions about mis-aligned eyes, long disappearances from public view, etc. the way they dog-piled McCain, say.) But reality is nothing more nor less than that-which-we-must-accept-regardless, isn't it?
The hilarious thing about that one was the total denial on very mainstream high quality podcasts about her "collapse". Some very serious very "just the facts" people insisting there was literally zero mentionable/noticeable about that video. To the extent it made you wonder if they had watched it. But they clearly had.
Was yet another eye opening moment, and I had thought my eyes were already super wide.
Since I don’t think I’ve self promoted in a bit and literally just answered a quest on my recollection about self promotion I have a whole substack about this, Extelligence.
The top post is the relevant part. Basically, Bird Watch for the entire internet.
Aljazeera is surprisingly impartial on most topics that are not Israel-related.
Edit: they are also not soft or hard paywalled, unlike Reuters or The Economist.
I agree.
I want you to unsubscribe me. I have requested this multiple times!
I've removed you since you don't seem to be able to do it yourself, but in the future I'd like people with this problem to do it themselves via https://support.substack.com/hc/en-us/articles/360059788812-How-do-I-unsubscribe-from-a-free-subscription-
Scott recently reviewed Malleus Maleficarum. If you enjoyed the review you might like a podcast I did very recently with Stacy Schiff on the Salem trials. I guess what I particularly took away from what she said was how important due process is. Well that and not relying on spectral evidence that nobody in the court can see except the 'bewitched'. In fact if I were on a jury today I am not sure I'd convict anyone - even confession evidence seems hopeless. Lots of the accused confessed - maybe to save themselves and maybe even because they began to think they might be guilty. In Stacy's book she has a reference to a two part article in the New Yorker called Remembering Satan. This is a recent case and is absolutely terrifying. Here are links to the article and the podcast.
https://www.newyorker.com/magazine/1993/05/17/remembering-satan-part-i
https://www.newyorker.com/magazine/1993/05/24/remembering-satan-part-ii
https://pod.link/1436447503/episode/31c2ef6b5fa117cd4e5b3939f3591d0d
I'm new to open threads. To what degree is posting topics in your own blog posts ok?
its fine if you aren't doing it all the time/every thread. The more relevant the post is to the topics of ACX, the less it will be viewed as spam.
I think he said twice a year of if it’s directly relevant but try not to overdo it.
An experiment: promote your Substack on my guestbook. https://www.newslettr.com/p/guestbook-promote-your-substack/comments
If people are going to post questions like "is it OK to post a Wordpress blog" here rather than on the linked post, the experiment was clearly a failure.
Does it have to be Substack? I have a Wordpress blog, although I have shamefully neglected it for the last year or two!
Thanks; I took you up on that offer!
Get a ‘th’ Evan! /jk
Because everything is recursive, I have created my first manifold markets market to help figure out how much funding y'all think the mini-grants will get: https://manifold.markets/JackRabuck/how-much-funding-will-the-acx-minig?r=SmFja1JhYnVjaw
Was curious after the Geography of Madness review: does the social contagion version of a disease (eg delusional parasitosis) improve if the patient changes social settings (eg moves out of the house where one roommate is spreading the delusion)? Or once your prior is trapped, does the original social cause not matter anymore? Any research on this?
It’s interesting seeing AI hit the mainstream, and I’m curious if the anticipation of AGI is going to begin affecting career planning, retirement saving, etc in a way that actually moves markets
I’m trying to figure out where to invest for the compute infrastructure. I think the chips are going to be a bottleneck but haven’t done any math on it yet.
This will be really interesting to see. Have been considering changing my pension contribution down a little bit. I worry greatly about the threat of AGI but clearly not enough to change my own spending habits too greatly. At what point do we start yoloing in a panic
Cycling websites propose cities institute "green waves" - https://www.centreforpublicimpact.org/case-study/green-waves-bicycles-copenhagen - patterns of traffic lights such that cyclists traveling at a fixed speed always hit all green lights (they might coincidentally get to the first light when it's red, but then they'll wait until it's green, and then after that they'll be "in sync" and always hit all green lights).
Why can't cities do this with cars traveling the speed limit?
They do (or at least did.)
They do this on down town streets in Buffalo during rush hour.
This was once common in small towns in the midwest US. The speed for which the lights were synchronized was posted on a sign as you entered town. It may still be common practice, I haven't driven through the midwest in many years.
They do! They've had those since the 80s at least. https://wsdot.wa.gov/travel/operations-services/traffic-signals
> When multiple traffic signals are close together on a highway, they are normally interconnected and are often operated as a coordinated system. Coordinated Signal Systems are probably the most misunderstood signal system networks. A coordinated signal system network does not mean that a driver on the main roadway will get green lights all the way through the network. But the system is set up to get drivers through the entire network as quickly as possible.
> There are times and locations where this can mean sequential green lights through the whole system, but that is not always the case. Traffic volumes, distances between traffic signals, sizes of intersections, pedestrian activity, and other factors may not allow for truly sequential green lights.
> Many coordinated signal systems are not operated in coordination all the time, but instead only during peak commute hours. This is why networked systems may operate differently depending on the time of day, which may mean more or fewer stops for users.
> Coordinated systems can also cause more headaches for side road users. Coordinated signal timing is typically based on the largest and most complex intersection in the network. This can result in longer than needed times at smaller intersections. If you find yourself waiting at a side street with little or no conflicting traffic, and it's in a coordinated system, this may be why. However, as with general cycle length, even coordinated systems are set up so that no system in the network has a cycle length greater than 240 seconds, and are as short as possible.
Sarasota, Florida did that for a major traffic artery. Didn't realize it was unusual. Could it be that lots of places do this already and you don't know about it? You can't do it for every street, of course, because adjusting the lights of road A so that someone going the speed limit gets all greens is not necessarily compatible with people on roads intersecting road A also getting all greens. Though I'm sure a computer could set up the lights to maximize the number of driver minutes people spend on all-green routes.
About 10 years ago, Los Angeles finished a major project to get all the lights on a central adaptive timing system so the timing can be adjusted on the fly according to demand. They claimed the average travel time during rush hour fell 20%. Of course for some people their travel got slower (or at least their perceive travel time got slower).
In DC, where i have the mosts experience commuting, the main avenues traveling into the city from the MD suburbs are mostly all timed to give green waves at just below the speed limit. The timing swaps for the afternoon rush hour.
What if the cyclists are not travelling at fixed speeds? All kinds of people cycle, from school children to older people. Not everyone is 30 years old in semi-professional kit cycling at a rate of twenty mph while timing themselves so they get a particular set of exercise in while cycling to work.
So I can see slower/faster cyclists not being in sync with the 'wave' and hitting red lights anyway, while car etc. drivers will just be pissed-off at the holdups.
In portland you get get a green wave by traveling at 12 mph which is pretty slow on a bike.
And yes faster cyclist miss the wave. The point isn't to allow everyone to travel as fast as they can, its to improve the total capacity of the traffic network. Most cars at rush hour are stuck in traffic and stop at every light anyway, this doesn't make that problem worse.
The article says it's set to 20 km/h.
My city has hills, and traffic lights optimized for cars. Trying to hit all the greens on a bike is a terrific trial, especially on some unhill stretches.
The idea is that the lights are timed differently than they normally are so you can have a green wave on a bike. In portland this speed is 12 mph which is a very reasonable speed. The green wave benefits cities with hills because if you get on one you are unlikely to have to stop mid-hill.
I would imagine that "cycling activists" they'd have them optimised for a man in lycra who cycles every day on a $8000 bicycle because screw everyone else, they're doing cycling wrong.
Your comment and trebuchet's don't reflect the current culture around cycling and walkability activism. It is not dominated by Lycra wearing weekend warriors or people who think they can do all their errands all the time on a bike.
The vast majority of people activating for cycling in cities are looking to make their commutes safer and lower the barriers to new people cycling on their commute. The mode policy requests are usually very reasonable and have been shown to have little to no impact on car travel times.
Relevant webcomic: https://web.archive.org/web/20181014180825/https://pbs.twimg.com/media/DpPzKuyU4AAdFzl.jpg
They can. The one-ways in my city are timed so that you can drive without stopping as long as you go ~5 under the speed limit. It's a huge pain when one of the lights gets out of sync, though.
I suppose it’s hard to optimize for car and bicycle traffic. Plus Denmark has few hills so you don’t have to account for climbing and descending bikes.
Weird, I did some work on this in college and someone reach out and asked for a meeting earlier today. Glitch in the matrix I guess.
So: they can and do. Haven't you noticed while walking that you often see the next green light even when the full stop might be for a few minutes? This is because you are moving more slowly than the system is anticipating. If you were moving at full speed you'd hit a green wave. (There will never be perfect green waves since waves can interfere with each other. But certainly it can be much better than a random alternative.) Traffic management is done through a series of green waves that flow over the city down street systems.
The issue is that it's dumb, constant, and has no feedback loops. There's no way for the system to get an average speed of cars in an area or often even whether cars are at a specific point. (Let alone things like emergency vehicles which often operate based on sound still.) Google kind of gets it by tracking phones. But this is fairly inaccurate. A better system would be to measure speed at both random intervals (since people act differently around lights) and at lights. But this has a groundswell of local resistance because every jurisdiction that does it immediately sets about ticketing speeding violators. But if you had that you could then dynamically control the waves and traffic.
Since cities don't they instead figure out the roughly average speed and then index on that, meaning it's constant (or semi-constant by time). This works roughly well enough but is ill suited to large surges in capacity and the problems get worse the more agents (cars) you introduce into the system. The famous example here is that in many cities a single car going 10mph below the anticipated limit can cause thousands of people to get caught in a traffic slowdown.
This is one of the promises of autonomous driving. Smart cars can transmit their location, speed, etc which can then be given to municipalities for their lights system and the cars can know what the speed the wave needs is. Though obviously we're not there yet.
I actually met a Chinese fellow who was trying to get his local government to implement smart traffic light technologies since they already have cameras/monitoring everywhere. But they weren't interested. Of course they have reason to not be interested based on how construction and government revenues work in China.
Ambulances have special lights that change traffic lights before they arrive at the intersection. You can install one on your car but it's illegal.
This is what I was referencing with "done by noise." I wasn't aware of the lights but I was aware of lights that are set to change in response to sirens. With the same "you can abuse it but illegal" thing.
They can and many places do. It’s much more common to set up green wave timing for cars than for bicycles (this is the first I’ve heard of the same approach being used for bikes instead).
Cities do in fact already do this for cars, where they can. But it’s hard to get it going in both directions unless it takes close to half a light cycle to move from one intersection to the next. I think bike and pedestrian travel speeds are better for that than car speeds, at usual block sizes.
If you’ve got multiple perpendicular arterials that intersect, then it’s very hard to do it on all of them at once, even in one direction.
It's called a 'Grüne Welle' and is done very often in german cities where I lived.
It works for main roads, where most of the traffic is.
I would assume it would be much harder for cyclists, since they don't drive in a uniform speed.
Wiki link:
https://en.m.wikipedia.org/wiki/Green_wave
Isn't this already the case for cars in areas where it's possible? It's hard in cities because there are a bunch of variables to juggle like not having traffic back up in specific areas. It seems like it would work even less well for bikes because I suspect the variation in speed among cyclists is greater than that among cars. Probably only makes sense in areas where bike commuting outpaces cars.
does that work in both directions simultaneously?
edit: yes, "On San Francisco's Valencia Street, the signals were retimed in early 2009 to provide a green wave in both directions, possibly the first street in the world with a two-way green wave for cyclists"
https://en.wikipedia.org/wiki/Green_wave
Why couldn't it? Imagine that in the east-west direction, you had a 300 foot line of cars every 1000 feet, and in the north-south direction, you also had a 300 foot line of cars every 1000 feet (forming a sort of dashed-line pattern). Then you just staggered the lights so that the dashes in the two lines were slightly out of sync, and each dash was moving at full speed through the gaps in the other line.
errr I meant a wave traveling east and a wave traveling west on the same street, which seems more difficult
Oh, I see. I haven't thought about that and I'm not sure.
Outside of a central city, there will generally be more traffic traveling in one direction during one part of the day than in the other direction. Signal time is usually timed with this in mind.
Additionally, most streets on which you would do this kind of timing are also likely to have a need for dedicated left turn lights which will further complicate having waves in both directions.
In the article it says they optimize for rush hour direction.
Yes in Portland https://www.portlandoregon.gov/transportation/article/394029
Will Scott Alexander mention my use of prediction markets to promote my substack, Mzungu, in his blog, Slate Star Codex, by 31st March 2023?
https://manifold.markets/merothwell/will-scott-alexander-mention-my-use?r=bWVyb3Rod2VsbA
Are you living in Uganda? Are you that kind of Muzungu?
https://mzungu.substack.com/about
Ok, but have you ever lived in Uganda or around there?
Sorry - yes have lived and worked all across East and Southern Africa, including Uganda. Lake Bunyonyi might be my favourite place in the world.
Ahh I love lake Bunyonyi too 🥰 so beautiful there and peaceful. One of my favorite spots also.
Reading the text exactly, I want to clarify the resolution criteria:
Will you resolve "no" if he mentions it here on AstralCodexTen rather than his defunct blog SlateStarCodex, as th text suggests?
Apologies, just realised now that is super unclear. It would resolve yes if mentioned here in Astral Codex Ten.
I can think of two people who got cancelled recently; I don't know them personally, but I don't think either person was an asshole or antisocial.
Hmm, what's your point? Are you suggesting all the people who have been cancelled deserved it because they were assholes? And even so, I'm not sure even an asshole deserves to be cancelled.
Thats an unnecessarily rude reply. At the risk of participating in the rude replies and taking a cheap shot, it seems you have the literacy problem in this exchange.
In your comment, Ragusea's "everyone of them was an asshole" refers to the group of people Ragusea had met who had been cancelled. In George Herold's comment "all the people who have been cancelled" refers to the whole universe of people who have been cancelled. Ragusea's set of people is a sub-set of Herold's. So there is no literacy problem in Herold's comment. Herold is asking if you are suggesting we generalize from Raguseas specific expereicence. Seems a perfectly reasonable question.
Just because journalists are backstabbing cut-throat assholes doesn't mean that social structure and power base isn't used to target other people too. Most cancellations are not aimed at journalists, and the being told the people at the top of this system rose up in this environment is not really comforting.
Yezhov was a monster who ran the NKVD, then was executed by it and replaced by Beria, another monster, who in his turn was also executed. You may say they had it coming, but between them, many thousands of innocent people were killed.
Meh, this is obviously just another "just world" cope from a person belonging to class that seems physically incapable of contemplating that mob justice is not justice and it doesn't work, it has been tried countless times and it doesn't work, even when it appears to work at the start it doesn't and sooner or later you will find out, typically when you are under the guillotine yourself, only for a poor generation of fools later to make the exact same mistake again later.
It's curious how thinly veiled the euphemism of "Anti-Social Tendencies" is, why don't you just be honest and come out and say "doesn't win popularity contests" ? I'm sure middle eastern or pakistani Atheists have plenty of "Anti-Social Tendencies" in the eyes of the wider societies they live in, so does most autists for that matter.
And it's hilarious how the conjunction between that and the kind of personality that does the cancelling works, those types are the very definition of "conflict-seeking, anti-social", so do they deserve cancelling as well ? should we follow every cancelling wave with a counter-cancelling that cancels the cancellers because they are/were anti-social ? but then we would need a counter-counter-cancelling that cancels the first counter-cancellers, when does it ever stop ?
Or, and this is likely what cancelling apologists believe, should the cancellers be a special class of people who are allowed to behave anti-socially at will in order to punish other anti-social tendencies ? but wouldn't this eventually end in very little anti-social behaviour punished, because most anti-social psychopaths would learn the lesson quickly and join the socially-approved group where they are allowed to be an anti-social asshole without repercussions ?
in some of my circles, that last sentence already happened a while ago
Unless they are family, you can always abandon them and get yourself better circles.
"Oh dear, I wasn't expecting the Spanish Inquisition". :^)
There was plenty to disagree with in that Adam Ragusea podcast. He really seemed to think that cancel culture is not a serious problem. He also seemed convinced that JK Rowling is a bad person, despite offering no evidence of her ever saying anything bad.
Nevertheless I enjoy his podcasts. It's good to listen to people with the 'wrong' opinions.
Are they defaming her? Are they doing so maliciously so as to harm her monetarily? Sounds like yes in both cases. That's illegal. Why is she supposed to just let that slide?
All the instances I could find of her suing would not be defamation/libel under US law.
So what? It is under UK law, apparently. You're not suggesting that contemporary US federal law is the arbiter of morality, surely.
No, but the majority of readers of this blog are based in the US so would interpret "break the law" to mean US law, which is not true.
Additionally, if Rowling can only hope to win her law suits in the UK, where laws are much more favorable to the plaintiff, then that weakens her arguments/complaints in my view.
Depends. Free speech has never included the right to call people names. The speech has to have genuine content to fall under the normal definition of the phrase. That's why yelling "You're an asshole!" at someone is not protected. No content, just vitriol.
Eh, hard disagree. Free Speech is any speech that doesn't have provable physical outcomes, and JK Rowling is a big crying baby if the claims about her suing detractors and forcing them to apologize are true. If nothing else, it reinforces the power and taboo-ness of the "trAnSPhoBe" word, while laughing it off will ridicule and diminish it.
Breaking your principles to win is only allowed under extreme circumstances, and JKR is a fucking billionaire, she's not in any immediate danger of losing from people calling her names on twitter. "Has to have content" is subjective, that's just like the wokes who say "Hate Speech is not Free Speech", well what's "Hate" ? whatever they fucking define this week.
For heaven's sake, if you can't prove it will physically harm people then it's free speech.
That's fine, you're certainly allowed your idiosyncratic definitions of a term, or philosophical concept. I'm just observing that it is not consistent with a definition that is held much more broadly. You can observe that Rowling is a goopy hypocrite by your definition, and she can observe you're an extremist adhering to unworkeable principles by hers. You're certainly entitled to assert she's breaking *your* principles (or definitions of "free speech"), but I don't think the evidence supports either that she is breaking *her* principles, or that she has no (consistent) principles at all.
I'm pretty sure that "you're an asshole!" is entirely protected under the First Amendment, and under the general umbrella of free speech as commonly understood. It is not an idiosyncratic definition of the term.
Specific accusations of criminal or immoral behavior *may* be barred, depending on the context, but it's not clear that "transphobe" in this context should qualify. I suppose the British courts will decide, for Britain at least. "Asshole", is protected everywhere free speech has any meaning.
Indeed, ensuring that any lowlife commoner can call the King an asshole if they want, is sort of the point (even if we might prefer they express their position more eloquently).
I for one appreciate a rich billionaire who cannot be cancelled fighting the fight that most people literally cannot afford to fight. If they are slandering or libeling people, and get called on it so that they stop doing it, that sounds like a win?
It's one thing to have an opinion that she's a bad person, or to not buy her books. It's another thing to accuse her of something specific that comes with both reputational and monetary harms, especially if it's partly or wholly false. I'm sure you remember the very recent attempts to get people not to buy some new Harry Potter game, because it's related to her, right?
I'm not on the side of the cancellers, so I of course I wish their bitchy mouthes would stop screaming.
The issue is that what JKR is doing is not significantly doing anything meaningful outside of her own interests, cancellers are cowards and they would quickly find easier targets to bombard, targets who can't sue. So what did we (== free speech enjoyers) gain ? nothing. JKR gained, but who gives a shit about JKR ?
If JKR was smart she should just keep taking flak and using it to increase the sympathy for her. She's literally unkillable, the campaign against the game did nothing and was free marketing.
Not unless it has zero basis in fact. A "phobia" is an *unreasoning* fear. I have a "phobia" if I fear black cats, because black cats can never (seriously) injure me. But if I'm afraid of tigers, even tigers in zoos behind bars, this is not a "phobia" because tigers certainly can do me harm.
You can argue that my (reasonable) fear of tigers should not prevent me from visiting the zoo, if I have good reason to believe the cage is stout, et cetera, but that's not the same thing.
So the question comes down to a factual question: *are* any transwomen men in drag trying to access women's spaces to assault them? If the answer is no, absolutely, that has never happened -- then anyone who makes that assertion can reasonably be accused of a phobia. If it happens so incredibly rarely that we can only find 1 in 100,000 cases where it has happened, then maybe we can also reasonably call that a phobia. If it happens infrequently but not rarely, we have a problem, and using a word like "phobia" begins to seem like more a denial of the problem and an attempt at tribal othering than a fair criticism.
"this is hypocrisy" does not equal "hypocrisy is the only sin" and I don't think caricaturing someone's argument so uncharitably will go over well here
Alternative hypothesis: the ones with "strong anti-social tendencies" are easier to *remember*. So when you think about who was cancelled, your memory returns the assholes first.
Imagine that Tom is a nice guy, and Bob is an asshole. When Tom's tweet made 20 years ago is taken out of context and used to ruin his career, Tom loses the battle without much of a fight. He finds a different job with less exposure to the public. Five years later, no one remembers Tom.
When the same happens to Bob, he writes a purposefully offensive statement, then doubles down. He provides lots of interviews to anyone who cares. Ultimately, he also loses his job, because there is only so much demand for contrarians.
Five years later, when you discuss cancel culture, people say: "well, I remember Bob getting fired for his opinions, but he was an asshole who has made many enemies, so he had it coming... uhm, I do not remember anyone else being cancelled... so I guess it is not such a big problem after all".
Falsification: You could start with making a working definition of a "journalist", then choose a year, and maybe a city... and then collect everyone who in given year lived in that city and fit your definition of a journalist. Then check what happened to those people afterwards.
"Blame it on Cain/It may not be his fault, but it just might be his turn" -- Elvis Costello
"They may not have been guilty of X, but they were sure guilty of something" is a frequent fallback by systems of justice that are found to be punishing the innocent.
It's so convenient, and so frequently fails on close examination, that I think the presumption has to be against it.
(And it's practically a cliché for the person making that assertion winding up the next in the tumbril.)
It's also a dodge around determining what the actual objectionable behavior is, and finding a way to deal with it openly.
Strongly agree, also it should be noted confirmation bias is always a thing, especially when you're remembering something from years ago.
This is exactly the kind of argument made about prosecuting black people 80 years ago. Do you not see that?
Judging people as bad regardless of whether they have actually done a specific bad thing they are accused of certainly is. Searching for new reasons to justify your hate doesn't make it okay when you find a new reason.
Human behaviour can be excused because nature is amoral ?
So I can sexually harass women at a late hour of the night wearing provocative clothes to my heart's content ? it's not just or unjust, it's just mechanical consequences.
"You're completely wrong in this case, and you've been completely wrong about every single thing for which I have passive knowledge of."
Oh, treb's a journalist then?
I think Lo-Fi is calming because distortion, filtering out frequencies, and addition of white noise makes the music less punchy and more sonically static. My kids say lofi sounds old or nostalgic, and they have hardly heard any 80s music. There is a youtube channel called something like "music on a record player in the other room on a rainy day." The lofi effects mimic music heard in the distance or in the other room. There is a nothing like hearing your favorite song waft through the windows from a passing car on a warm night.
My understanding is that most internet Rationalists do not listen to Japan Audio Society High Resolution Audio via closed In Ear Monitours and performant Digital Audio Player in partially optimised home residential environments. And thus they are already implicitly opting in to an essentially lossy, low-fidelity, high-distortion setting for their listening sessions. Some use sounds and noise qua anxiolytic or mild anxiogenic to be pleasant. But Kevin Simler of the Melting Asphalt web log almost certainly got it wrong re his analysis of modern human brain organs and the radically shifting landscape in the Qobuz art market for High Resolution Music.
Yeah, imperfections make things more naturally listenable. Since you mentioned noise, various types of noise (there are many) have been used as a deliberate production tool (e.g. side-chained pink noise against snare drums) in almost every 80's rock album and still now. People fall asleep to white noise etc.. Noise is an entire genre. Why is there a cult following around The Caretaker "An Empty Bliss Beyond This World"? Partly because it's an album designed to make the main musical elements as uninteresting as possible, but instead focus on degradation, hiss, and other analogue artefacts that make things sonically compelling.
The Lo-Fi thing has become a total joke though, honestly. Zoomers will binge-watch Stranger Things and think the sound of an 8-voice polyphonic synth being multi-tracked and passed through $10k worth of studio-grade effects is "low-fi" because the show is set in the 80s and back then technology was meant to suck. If this demographic ever had to listen to a cassette of some anonymous blues guitarist they would probably writhe in pain, because it's not the fidelity per se they are referring to but the nostalgia, except it's a nostalgia for a time they never experienced in the first place.
Digital recording at the professional level emerged in the late 70s and was ubiquitous by the late 80s. At the same time analog playback was still the norm. Yes CDs existed but most people didn't have them yet.
In the days of analog recording studio time and the physical media that the music was recorded and mixed on was expensive. Mistakes by artists and engineers cost real money on a product that often was not a great money maker; most acts did no go on to be stars. Producers had to come out of pocket on the front end of a new act that was a risk. This produced an industry of skilled professionals, both behind the boards and the instruments. An alarming number of very well remembered musical acts don’t play a single note themselves on many of their early records despite the credits. Even for established acts “ghost” musicians, paid for secrecy, remained a norm as long as analog systems did. It was too expensive to let popular musicians keep making mistakes in the studio, plus studio time is reserved in advance and there is a chance you will get no usable recordings at all if you let them try. If you enjoy rock music from the 50s -60s, you’ve listened to many songs that aren’t being played by the good-looking young men on the album cover. There’s a good chance its actually the Wrecking Crew. (As an aside, country music largely avoided this convention by just being honest about it. Country promotes the front man/women, not the ‘band’, who are largely interchangeable pros who specialize in studio or tour, but seldom both.)
With digital recording, the ‘wasting money and time’ element largely assuaged. I heard an interview with Eddie Van Halen once; his career spanned this transitional period. With analog he would score out all his solos in advance, practice them a lot, go into the studio on a special day, completely sober, bang out 3-5 iterations and live with whichever was the best one. After digital recording he had a small guitar studio set up at his house and could play 100+ variations and keep the one he liked. Still, the final product was analog, so you still needed those pros behind the board to make the final product sound good. After all, someone was still coming out of pocket on this physical media before it ever sold a unit. There was still risk.
Things are different now. I have a digital recording “studio” in a spare room that cost a few grand. Its capable or producing sound quality unheard of in the analog days. I, however, am not. For me this is a big part of why older media has a pleasing “timbre” or whatever. Everything is polished as well as it could be with the tools of the era. Everything that we still like though, survivor bias and all that. I had a local DJ who’d be in his 80s now assure me there was plenty of garbage back then too that time has filtered away.
On the other hand, the modern super-clean pro-tools enhanced, quantified to a click track and mixed on an overpriced Macbook sound is (potentially) available to everyone who can scrape a few grand together and a lot of it is crap. There is no up-front financial risk to producing music apart from the gear, we mostly just waste our time. Only the very best work product from the past sticks to our collective memory, but right now anyone can have a Soundcloud.
"Only the very best work product from the past sticks to our collective memory" -- this is true of everything. Movies, literature, TV shows, whatever. I like to watch old Hollywood movies on TCM sometimes and it is striking how clumsy and poorly acted a lot of them were. For every "Casablanca" there were 10 or 50 generic potboilers that now all blur together as a wall of cliches.
Sticking to music, years ago I stumbled on a website where you could click and hear a minute each of the USA Top 20 or something hits of each calendar year. I tried it for the years when I was in high school and it quickly became....painful! Because I was being freshly reminded not of the great songs that I remembered anyway, but of all the other absolute dreck that I'd forgotten but now had to admit my generation also eagerly listened to at the time. And that second category way outnumbered the first....
Japanese aesthetics traditionally prize imperfections in some contexts, part of what is meant by the term "wabi-sabi". Interesting area to explore!
In terms of music, though, I think part of the reason those imperfections are prized is that they can add complexity of texture, plus variety, to the sound. People find complexity enjoyable. Also, certain kinds of "lofi" sounds more specific positive aesthetic qualities. For instance, downsampling can reduce high frequencies which would be perceived as harsh; it can impart a sort of "unreality" to the sound, in a similar way as cinema's low framerate; and it can give a quality of unity to disparate sounds.
The specific innovation that precipitated the present low-fi thing might be that GarageBand etc. gave the masses decent-sounding DSP (digital signal processing) for the first time and *suddenly* everyone wanted the "ugly" sounds their computers made before
Conversion (for reasons other than marriage) is strange to me these days, one doesn't think of religion as something you can shop around for but of course that's how it first proliferates. Rationalization ought to have nothing to do with the validity of a faith, according to the faiths. Belief is not (and ought not, according to the gatekeepers) be contingent either empiricism or ulterior benefits - there is but the "proof" of selected scripture, fearful calls to accept dogma lest one not find Salvation, and fervor (in a group or otherwise) being explained as spiritual experience. Of course in reality there are other reasons many people have converted and changed the course of History.
For us plebs there are only a few possibilities I've encountered: a) one believes because they "felt God / the Holy Spirit / The Truth" once exposed to a certain brand of religion, b) one believes because most of society does and it's a useful heuristic, c) one believes because it is comforting, for some reason, d) one believes because it is useful for some other reason. The explanation given is almost always 'a', with no cognitive dissonance over the fact that it's the same given with other competing religions. The defensive reasoning is that those other followers are wrong, what they experienced is not Truth, but some "quasi-Truth" (or nil, they're faking). And yet, this could not be for the believer in question.
Among the more Liberal practitioners (cultural Catholics), there is popularly a kind of watered down belief to help reconcile with multiculturalism, which goes "all religions are equally correct and allow communication with God". This absolutely, definitely does not come from the top / the Church. However the "aesthetic" / cultural type of practice does seem more agreeable and is less likely to breed hate than fundamentalism, which necessitates that everyone else is wrong and may or may not be heretics.
If the validity and Truth of a religion is evaluated by your subjective experience, and not for reasons like "x/y/z aspects of this religion appear more rational to me", can your faith really be confident, or does it functionally manifest itself more like a gamble for the afterlife?
Why does God let evil happen if he doesn't want evil and he is able to prevent evil and he is aware of all the evil that happens and keeps happening and will happen ?
Why does God create atheists if he knows they're going to end in hell ?
Why does God create a universe that can be parsimoniously and convincingly explained without him ? Why is his idea of evidence a bunch of books that can be traced back to human writers, and of which there are thousands filled with conflicting and contradicting stories, and a bunch of miracles that nobody living in a long long time ever saw, heard or otherwise reliably sensed ? Why can't God just write about fusion or programming languages in one of his ancient holy books to utterly and definitively prove that they have unearthly origin ?
Why does God need books, words and intermediaries to speak to us at all ? He can't just utter his thoughts in our brain directly ? Even human musicians can evoke meaning and emotions in me without opening their mouth.
Why does God not endow his adherents with any obvious advantages or virtues ? Why are catholics not richer or more intelligent or better educated or more honest or more beautiful or stronger on average than the adherents of any other religion ?
Why is Christianity historically recent ? What was God doing during the thousands of years before Christ and Moses and Abraham ? Do the people born in those times (and during all the times but in places not accessible to Christianity) go to hell ? heaven ? both would be unfair.
Why is the Universe ? what purpose does it serve ? if it serves no purpose, isn't he stupid to create with no purpose ? If it serves a purpose that he is reluctant to tell us, isn't he cruel for creating us with brains that love to seek purpose then dropping us in an existence with no obvious purpose ?
How can I trust that God loves me when there is no one of his own class to love ? How did he know about love ? Love requires that the object of love is something or someone seperate from the lover, but nothing is seperate from God. The entire Universe is a piece of him, his love is literally narcissism. Do I "love" my arm ?
Why is God ? Is he an accurate evaluator of his own usefulness or impact on the Universe ? that is, can we trust him to commit suicide if he finds that he is a net harm to existence on average ?
How does God know that there is no GoD above him that wants him to worship just like he wants us to worship ? How can we know ?
Even for an agnostic like myself, a lot of your questions obviously centre round the theme of free will granted to us (and other animals up to a point, and even inanimate matter in its random aspect). If there was no possibility of evil, then we would be like mechanistic puppets with no choice and thus no merit either.
Also, God (if he exists) caters not just for us but presumably for all future generations, and what seems like an evil to us, such as a volcano or earthquake that kills people today, may be a benefit to people in the future, in fertile soil for example or fresh land created by plate tectonics.
There is a verse in the Bible's Book of Genesis, after the account of creation, that seems to me revealing: "And God saw that it was good". Why would that be worth mentioning if it was a foregone conclusion? It suggests to me that the author intuitively understood (or was divinely inspired to know, if one chooses to believe it) that God is an experimenter, and endued people and matter itself with a tendency to make mistakes. He thus chose to be uncertain of the outcome of his creation, with most outcomes perhaps being unfavorable, like evolution. As the Man himself said "Many are called, few are chosen". ..
Free Will is a paradoxical notion that few (if any) understand, I certainly don't, do I have free will ? I didn't choose my body, whether I prefer to be born or not, where to be born, what language would I speak first, just to pick a few things. Those things pre-determine metric tons of things about me. Where is Free Will in those things ? If God is such a fan of Free Will why does he create people in situations where there is a single course of action (the other being Death), did God think about the Free Will of all those millions of Germans he created between 1920 and 1950 ?, the tens of millions of Russians he created during the same period ? did he think about the countless poor women who sell their bodies to eat and feed their families everywhere since time immemorial ?
The Universe sure as eggs doesn't seem so optimzied for Free Will, and the vast majority of our life is spent doing shit we would rather not, so what's a single forcing function more ? One that enables Good and not Evil and cruelty for a change ?
Also, if Free Will is so good would it be present in those select few who go to heaven ? If yes, then does that mean that those in Heaven are not, in fact, infalliable and can be tempted to commit sin and suffering while in Heaven ? Wouldn't that mean that Heaven isn't so special after all ? Or does God have a secret sauce that makes Free Will works while not making us dumb machines, in which case he can save incalculable amounts of suffering by applying it to this Universe instead to make it more like Heaven without sacrificing the precious Free Will ?
>what seems like an evil to us, such as a volcano or earthquake that kills people today, may be a benefit to people in the future, in fertile soil for example or fresh land created by plate tectonics.
This, as they say in my native tongue, is "an excuse even more uglier than the wrong committed". So God's excuse for all the endless cruelty that plagues this Universe is that it enables better things in the future ? The OMNISCIENT, OMNIPOTENT and OMNIBENEVOLENT creator, can't make good (like he made the entire universe from nothing) without inducing untold amounts of evil in the process ? Why would people worship a medicore engineer like that loser ?
There are centuries of theist philosophers who address these exact questions far more eloquently than can be addressed in an internet comment. I'm seeing your future: it's a trip to the library.
With all due respect, the philosophers you talk about are cheerleaders for dogma. They pretend-play thought, but they are really doing propaganda/sophistry, because the conclusion is fixed in raw crusty stone before you start a single gear of your brain into action.
Of course, this can still be useful. Fixing the conclusion, although done with shameful coercion and condescension, acts as a "Rules of the Game". i.e. "Let us all **pretend** that God just unquestionably exists, let us make the punishment for suggesting otherwise Death or Worse, just to strengthen the pretense of course, what are the answers to <questions> given all of this ?", it might be a good and useful game, depending on the exact nature of <questions> and how they are phrased, and tons of smart people participated in this game, I'm sure.
But make no mistake, this is not how thought should work, and any intellectual work or play done under those cosntraints is inherently suspect and lowly.
Yeah and they mostly do a piss poor job in terms of providing even slightly reasonable answers. Even the supposed greats.
>you see that people who regularly participate in religious services generally report better life outcomes in terms of health, happiness, and so on.
This seems true for any traditional/religious community, not just Christianity in particular. Which to me is by far the strongest contemporary non-metaphysical argument for religion vs. atheism. Atheism may be better justified epistemologically, but until atheists are able to use their enlightened rationality to actually implement a better society, their smug sense of superiority is undeserved.
>This seems true for any traditional/religious community,
My exact question was :
>>>Why does God not endow ****his adherents**** with any obvious advantages or virtues ?
Assuming God is a fan of logic, Religions can't be all true, a single religion being true will invalidate almost all or the vast majority of others, by pure deduction. So "his adherents" must be the adherents of a single religion or a very small group of related ones, and given that *all* followers of any religion reap the benefits you speak of, hence my question.
Religions and traditions are good to people because this is how people evolved, almost everything monkeys do together make monkeys happy. Singing together, eating together, just watching paint dry together make us happy.
And this atheist, for one, does love non-harmful traditions, and does think that people need something bigger than themselves, to worship and live for and defend. Start with your family : I personally think men should worship their mothers and wives, and women should worship their fathers and husbands. Worship something you can see, whose effects on you are measurably Good, whose obvious flaws are not something to deny and defend hysterically against all reason, but to acknowledge and embrace and tolerate. Next you can have a go at more abstract vaues : Love, Mercy, Justice, Beauty. Though we obviously still need personification : the godess of Mercy can be a nursing mother, and that of Beauty can be a small girl.
Where Christians lose me is when they make the bizarre leap from this to "A God whose evidence of existence is a 2000 year book written in an extinct language by known human authors, and a mother whose claims of virginity violates every single thing we know about how mammalian reproduction works", why ? just why ? there are literally millions or billions or infinite ways to worship, why this particular arrangement of extremly arbitrary collection of claims and practices ? I will tell you why : because over the course of 2000 years various coalitions and authorities saw it fit and useful to brutalize people to believe it, and without them Christianity wouldn't exist. Just like the countless religions and cults that come and go on the daily since humanity's dawn.
>atheists are able to use their enlightened rationality to actually implement a better society
What a bizarre statement, when did atheists ever claim that their specific differentiating-from-other-groups goal is to make a better society ? this is **Everybody's** goal, and the religious are no good at it than any other group of people. Religions are hilariously and notoriously bad at co-existing, for example, since you can't co-exist with people you deem to be misguided and blind to the fundamental truth of the universe. This mainfests in multi-religious societies for example (Lebanon, Iraq, India), but even a 100% percent religous group that is serious about its religion often finds international relations difficult.
What "Atheists" (a ridiculous loaded term that is obviously false if interpreted naively, every atheist is really more like an agnostic who thinks most religions are bullshit) really want is for you to not surrender your life and your morality to the dead hands of mad dead men. Muhammed and Jesus were just ordinary people full of bullshit, their bullshit succeeded by brute force, and anything good they said was a truism during the times they lived in. Their morality is so far below what you can achieve by listening to your mind and heart and just waking up everyday with the intention to be good. Dead men shouldn't control your life and govern your interactions with people and the universe.
My favorite thing about atheism is that religious people are already intimate with it : they practice it with every single religion except the one their mother raised them on. Imagine this, but better : No religion gets a pass, all cruel and false bullshit gets shat on. Marvelous, isn't it ? We have to call this tradition something, maybe, maybe, "No Religion " ?
>and the religious are no good at it than any other group of people
The religious established every single first world country, as it happens.
>What "Atheists" (...) really want is for you to not surrender your life and your morality to the dead hands of mad dead men.
Maybe not so much these days, but plenty of them weren't opposed to the idea, what with Lenin's mausoleum still standing on the Red Square.
>The religious established every single first world country, as it happens.
They also established every single other country, including the horribly failed ones, so that's just hindsight bias along with cherry picking in one big fallacy smoothie. Why do you get to pick the Netherlands or Denmark to speak for Christinaity and I don't get to pick Venezuela or Mexico or Namibia ?
Also "first world countries" were not always "first world". Every single one of them have a long history full of... "fun". Their first-world-ness is a consequence of a huge complicated mess of reasons that start with "Make machines that can work 24/7 tirelessly" and ends with "profit". Also mix in colonialism and a bunch of random noise for good measure.
>what with Lenin's mausoleum still standing on the Red Square.
Fair, but again, isn't that a bit too much to ask of atheists ? Herd instincts are ingrained so utterly deep inside every human that I despair of us ever reaching enlightenment and independence without physical brain modifications. That's how lost I think we are.
Atheists identify a single, very important, way that our herd instincts can drive us wrong, and are against it. They cannot and should not be depended on to identify *every* single way we can be mislead and driven against our better knowledge. To save you from Totalitarian Religion, you need Atheism. To save you from Totalitarian Communism, you need Anarchism. Why would you expect 100% intersection between the 2 ?
I like Yudkowsky's definition of rationality including both epistemic and instrumental parts. If your beliefs are so correct, why aren't you rich (in utility)?
Is he rich? I watched him rebuff a couple soft social entreaties from Elon on twitter. I was kind of gobsmacked by it to be honest. Does he have a secret fortune or something he’s leveraging to fight AI Risk from trading or something?
Why specifically Catholicism? I believe in God —with one of those long explanations that are like explaining your dreams to people where they just are listening so that you will listen to them explain their dreams— but not a specific Catholic God. Any reason for that tradition in particular?
I had a religious experience in my mid twenties that flipped me on it but I also know that isn’t evidence or even compelling to anyone else but me so I go through apologetics when taking to others.
Is there an emotional benefit you’re getting from the Catholic Church in particular? Like I don’t even have faith anymore per se because I believe in God the way that I once believed my parents will come eventually if they leave me home alone for a few hours. I know that is probably anti compelling to some people here but it’s where I ended up.
I don’t buy into a lot of the specifics of how the belief in God has been implemented though.
If you are just hedging, Judaism doesn't necessary hold that Catholicism is better than atheism. As with all things Jewish, there's a debate but trinitarian versions of Christianity are generally considered idol worship, and one of the 7 Noahide laws applicable to gentiles is to not do idol worship. So to Jews, you being an atheist or Unitarian or Muslim would be preferable to being Catholic.
But (and I don't know anything about this part) I believe Christians have a higher penalty for not being Christian that involves eternal punishment in Hell, and we don't have anything like that for gentiles who don't fulfill the Noahide laws, so maybe from that sense your Pascalian wager makes sense.
Christ definitely didn't found a church. I think the surviving texts are pretty clear on that.
Which texts? This seems to be covered pretty straightforwardly in the New Testament.
The delta between "Catholic Church" and "Go and baptize people and tell them to live as I said" is super super large.
Not even getting into the issue that the passages saying Jesus said that were written after it had already started, by the people who were benefiting from the hierarchy of such a system.
You would expect if a church was his main goal more of his life would have been about that...
I mean whatever, it is hard to take the story seriously at all as traditionally presented.
So you believe that there was a church founded in the text and just don’t think it was the Catholic Church? This is different from your original statement.
Curious, I think Leah Libresco went through a similar journey. Does yours resonate with hers?
Well, I didn't see a previous version of this, so presumably you should have a good answer to: what made you change your mind? And was this None your usual Rationalist atheism or some softer agnosticism?
In case it's interesting to you or anyone else reading this, I've put my old review of Feser (from several blogs ago, now offline) on a Google Doc: https://docs.google.com/document/d/1Y3UHArTNsj684KtA2rho-430AKh6Rxh6ruhlt1PKOhY/edit
I only got through liveblogging the first few chapters, though.
Thanks!
How do you deal with the tension of the idea that we were made for a certain end by a mostly stochastic/amoral/etc process (i.e., evolution). (Not an attack, but a genuine question I'd love a coherent answer to)
I guess my question was not super clear. Evolution is an aimless process that selects genomes from random walks in genome-space. It is perfectly consistent to posit that God put natural processes in place to make it so that this process would lead to humans evolving for a particular higher purpose. The tension arises when you consider that humans are largely an accidental by-product of this process, and the same process could have (and may have, in different planets/eras) led to the rise of vastly different entities to ourselves. Given all the selection pressures in place, if we'd turned out to be baby-eating aliens, would we not be equally inclined to interpret our characteristics as meant for some higher end? But surely not all products of evolution can be meant for some higher end - some must necessarily be evolutionary dead ends, if not steps in the "wrong" direction. Do you have an answer to trying to tackle this particular concern?
This was interesting last thread but now it's starting to look like spam. I'll let you answer questions this time around but please don't post it a third time.
Comments sections and such are the natural habitat of the contemporary catholic convert. There's a sense in which the arena of purely written polemic offers a home-team advantage (and has for centuries) to first the catholic and now the tradcat, e-cat, and others of the ilk. There are no other domains where this is the case, so it's natural of you to want to maximize your enjoyment here where it's easy.
Also, can you easily link to last thread?