I don't know the specifics, but you probably want books on project management. In the absence of a structure large enough to include project managers, you will have to wear a bit of that hat, which includes negotiation, estimation, and presenting a case to a decision-maker.
If you remember it fondly then it probably does have value. But had you seen or heard similar stories at that age before you saw it? The context matters. If the first time someone is confronted with death and cruelty is watching the 1942 Bambi, it will seem a monstrous horror. But most kids who see it have already seen birds killed by cats, heard parents explaining where bacon comes from, or headshotted opponents in a tactical FPS, so Bambi is just a little sad, not a searing tragedy.
I agree that context matters. Protecting children by taking away the opportunity to watch a sad movie makes it even harder to watch the next sad movie. Before you know it you have university students asking for trigger warnings.
I saw Bambi when I was five and I cried. I still cry at movies but I know that emotions over fiction pass quickly. I can learn from them and enjoy them even while experiencing strong emotions. Better to learn early, I think.
I believe children should actively be given dangerous situations, under adult supervision.
For example, have a 3-year-old stir a boiling pot, explicitly telling them where to put their hands, while you wash dishes slowly, one at a time.
Emotions? Really? Yes, bad emotions are a risk with good art, but life won't be so kind if you look away completely; and you should be taking risks far beyond even that simple example.
A 3-year-old and a boiling pot, really? I was going to fully agree with your opening sentence but I don't think you have to go *that* hard. Great if it worked out for you and all, but wow, that's really young.
If it were my kid, I'd have taught him how to use some kind of small knife while cooking (buttered noodles get old, and his salting was usually overdone). I think he's 5 now; I think he's the smartest of my siblings' children at that age, if also close to the most stubborn (which is saying something, given Mormon family sizes). If you didn't keep those hands busy he could very easily leave the house during any short lapse of attention, and my sister needed someone to tire him out while she had a newborn.
Children want to help and be taught, until it's crushed out of them.
There's definitely something weird about memories of movies watched as a child. I find (and others I've spoken to agree) that when you rewatch a movie you'll be amazed to find whole sections, whole themes, whole subplots, or whole really obvious stylistic aspects that you have no memory of. This holds even for movies I watched *dozens of times* as a child!
It's really quite bizarre. It makes me wonder if I'd watched an edited version. But no, I didn't, I just tuned things out apparently, and most children do. Just one example: a lot of 90s kids movies are full of slapstick, throughout the movie. I hate slapstick, and yet I remember liking these movies and had no memory of the sheer amount of it. I must have just tuned out the parts I didn't like.
The same applies to bad dialogue and bad acting, though part of that's lack of knowledge of what's bad and good, but not all of it.
As another parent, I don't know anything else about the movie, but I'd strongly consider showing it to him. It's important for your kids to get in contact with the full range of human emotions. In general imo most parents are on the far end of safetyism and children can handle much more much earlier than they think if you actually teach and help them along the way.
Funnily enough, just yesterday I had a related discussion with my wife. Our daughter (3) watched a children's TV show we hadn't seen before, about a family of dinosaurs, and we noticed that several episodes go like this: 1) the protagonists meet other dinosaurs which are just as intelligent as them, 2) they get into conflict, 3) the protagonists win the conflict and ... eat the opposition.
At first my wife was shocked, but after a short talk we both agreed that it might be a good counterweight. We had been complaining about some other tv shows she had seen before which often portrayed even dangerous predators as actually nice deep down and that all animals can get along great through the power of friendship. We know even quite a few college-educated adults who have completely unrealistic ideas about animals (along the lines of "bears would never attack humans unless unnecessarily provoked, anything else is animal hating propaganda") and it often strongly biases their politics in a bad way.
Obviously you still need to talk with the kid and contextualize what they see, but if you do they can handle it just fine.
I agree about kids getting in touch with the full range of emotion, but most are quite able to do that without seeing, when quite small, a movie with a tragic death in it. Think about real kids. They cry far harder and oftener than adults do, and while some crying is set off by anger or frustration, lots is set off by grief. Kids grieve when they lose a favorite toy, when they suddenly start missing their parents when at school or with a babysitter, when their feelings are hurt during play -- and also when they see sad things happen. I can remember a few times when I was small and cried in sympathy when some other kid fell down. I saw my daughter do that too when she was small. And while the things kids cry with grief over seem small to adults, they seem huge to the kids. Don't you remember being a kid, how it felt?
My introduction to death was when I was about six, when my mother brought me with her to visit the deadhouse (as the hospital mortuary was referred to), where the body of a neighbour was laid out before being coffined. Everybody knelt and said a prayer; afterwards there was the funeral and burial, and I saw the coffin being put into the grave and the grave filled in.
I did have a couple of dreams about death and being buried myself after that, but I put it together that hey, when that happens, I'll be dead so I won't know or care.
So I find it hard to think that a movie could be very traumatic as an introduction to loss, when the Kangaroo is hopping away in good health and still alive, just not going to be around Dot any more 😃
Absolutely, though, I wouldn't let six year olds watch horror movies or anything with explicit violence.
I think precisely because it's a smaller problem, it's more likely to haunt a child. Kids often have separation anxiety, and don't commonly have a self-preservation instinct (or any other sense of their imminent mortality). We had a dog die this summer and then got a new one; the conversation with my five-year-old about dog cremation was morbidly entertaining, but having the new dog run away from me on a walk and go missing for fifteen minutes clearly made more of an impression.
This should not be misconstrued as an argument against watching the movie; if nothing else, practice separation is presumably actively helpful in handling real separation anxiety.
My intuition is that certain kinds of media and themes will shock and/or distress kids the first time they see them, regardless of the age they are when they're first exposed to those themes. I've known people who were exposed to horror movies when they were six and people whose parents sheltered them until they were sixteen, and it seems like they had roughly the same "trauma" response to their first horror movies. While it might be harder for a six year old to dismiss the "trauma" of their first horror movie...I dunno. Teenagers are often extremely good at leaning into and even enhancing their own "trauma" with rationalizations for why scary stuff might be real. I know I managed to be as scared about alien abduction at 14 years old as I was of monsters in the closet at six.
You weren't asking about horror movies, but probably exposure to the concept of loss is similar. It's going to hurt regardless of the age your kid is when they first experience it as a theme. Five seems like a reasonable age to be exposed to the concept of loss and why it makes people sad, especially if it's handled in a beautiful way. And while I'm not a parent, my intuition is that it's better to have initial exposure to the themes of loss via media rather than a sudden shock of it in real life (the sudden death of a pet or grandparent, etc).
Edit to add: Be prepared for your kid to be heartlessly disinterested in your beloved Dot and the Kangaroo. He might be unforgiving of the rough animation, slow pace, etc after being trained on 2020s modern media (presuming you've allowed them to see any).
He may not cry, so go ahead. It's a way of introducing children to the idea of parting and ending of things, and at least Kangaroo isn't dead, she's just leaving to let Dot return to her human life (and it's open ended to the possibility that they might meet again later).
Sometimes people leave (they move away, they die) but while that's sad, it's not a bad thing and you go on with your life.
I think it's important to introduce this stuff to kids early. Tragedy sticks with you, and I look back on the tragic - even borderline traumatic - stories of my childhood as the most enriching.
I was not ready for Bambi when I was 5. I was ready to learn about death, but the movie presented it in the most traumatic form imaginable: the death by violence of a mommy. After seeing Bambi I understood the reality of death better, but it really left a huge dent in my sense of wellbeing. I was tormented for years by stories that formed in my head about little animals left in the nest, grieving, terrified and starving to death because their mothers did not come back. I think you should err on the side of caution with kids about matters like that.
When I was college age I taught nursery school for a while part time, and when the school guinea pig died we showed kids her body the next day, and answered their questions, and let them examine her body or pet her (and then wash their hands really well). We also told parents about the guinea pig's death and how we'd talked with the kids about it. I think that was a decent introduction to death for the kids.
Movies hit kids differently, I think. My son helped us bury his grandparents' dog, and I think that was a positive experience. I would be reluctant to show him Bambi, though.
I remember being very captivated by Bambi's father, though, and how he sort of revealed himself to Bambi (and to the child viewer) - which I don't think would have happened without the death of his mother. I remember also that frightful word - "Man!" Which implicit lesson re nature and loss has only grown more true as time passes.
Once I was sitting next to one of the local springs with a den of Cub Scouts among others, I think it was, listening to a park staffer give a little talk about the "spirit" of the springs, a variety of salamander, and she asked the assembled group if they knew what the salamander's chief predator or threat was.
We all sat awkwardly unable to answer for a few moments.
Then a kid piped up bravely, and with something of that Bambi drama: "Man?" And I think we all, adults and children, thought to ourselves, yeah, that tracks.
"Uh, good guess? Actually, it's crawfish", she said.
Dumbo seems like a good precursor to Bambi. It's wrenching but Mother doesn't die.
I think you're right to ask (and right that there are absolutely movie experiences that can traumatize young kids, who sometimes don't know or can't fully process that they aren't witnessing real events), and I don't think it's a question that can be settled on the basis of some principle. It's not actually "should we shelter kids from difficult feelings or toughen them up to real life?", it's "is THIS child ready to have a salutary, if sad, experience with THIS movie?" You know your kid and his sensitivity level--does he remind you of you at that age? Does he tend to take things in stride or does he have intense feelings sometimes that don't make sense to you as an adult, does he perseverate or worry to the extreme about things related to loss?
Mine used to have very intense feelings about lost objects, which he tended to personify. It wasn't "I'm sad I don't have this thing anymore" so much as "this thing will not be OK without me to take care of it." (He also once in a while had a panic meltdown for incomprehensible reasons, e.g. that a toy was lying at the bottom of a wading pool. Well, that one was somewhat comprehensible; it clearly held symbolism for him.) He was an incredible packrat because getting rid of possessions felt to him like, maybe, dumping a pet by the side of the road--you don't do that just b/c it's old and not fun anymore, and in that same spirit of care we had to keep old toys, papers he had scribbled on... and oh my Lord, we left behind a rotting stick at the creek once whose tip was shaped a little like a horse's head and he brought it up for 2 years whenever he couldn't sleep. He's outgrown this completely now at 11, thankfully. All that to say, these things seemed to be proxies for him for a deep aversion to the idea that irreversible loss and sorrow exist. I literally went back and tried to find that damn horse-head stick because he could not. stop. thinking about it. (And believe me I was trying to ease him along into accepting that sometimes things are just gone. And eventually he did.)
So... I probably wouldn't have shown him the kangaroo movie at that age. (I screwed up on a few movies. I wanted him to love The Iron Giant but showed it to him too early and he thought it was sad & scary.) But when I saw him shift over to being less sensitive, which might have been around 7, then I probably would. It's not an either-or question, it's a question of when.
But if none of this rings a bell at all, if it all sounds so unlike your kid that mine just might be a space alien, maybe you should just go ahead! I do think a lot of kids could handle themes like this at 5--or be sad but in a way they can feel is helping them, as maybe you were.
Your story about your son is a good example of why you don't have to deliberately introduce most kids to tragedy and loss. The little losses of their lives feel huge to them. They are very emotionally alive.
It was an old favorite from when you were five, so you know for a fact that at least one kid can handle it. The question to ask is whether you think your son is meaningfully different from you in his ability to handle sad stories. In general, I agree with the majority here that it's good for kids to encounter difficult emotions in fiction. And if it becomes too much, you can always pause the movie to talk about the movie and give him a chance to decide whether he continues or not.
Yeah I think they need the opportunity to experience and rehearse different kinds of emotions in a safe manner while they're developing. If they have a strong reaction then have a discussion with them afterwards to help them process and contextualize the feelings, but I don't think that shielding them from children's movies is going to help much in the long run.
One thing I think is pretty true is that not only *can* most humans have all of the standard suite of human emotions, but we *will* have them with some regularity, because the brain doesn't like to let parts of itself just sit inactive and atrophy forever. If you don't have any appropriate targets for an emotion in your experiences, you will attach that emotion to *something* going on in your life, in a way that may be less appropriate and more damaging than just having an actual correct target.
For negative emotions, movies are probably a good target because they provide accurate contexts to attach those emotions to, while having those events not be something in your own life that you have to constantly fear or obsess over.
I've watched several videos of the SpaceX Super Heavy Booster going straight back to the launchpad, which is one of the coolest things I've seen currently happening in the space program. A question I've never seen answered: Why could they never recover the space shuttle fuel tank like they could with the rocket boosters? It seems like a huge piece of equipment to throw away and replace every single time.
The SRBs (solid rocket boosters) were jettisoned at a speed of roughly 4,800 km/h, while the ET (external tank) was jettisoned at over 28,000 km/h – close to orbit – so reentry was much more violent.
(numbers from Claude, so double-check them before building your own reusable launch system)
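For a rough sense of why that speed gap matters so much (taking those unverified figures at face value for the moment): kinetic energy per kilogram scales with v², so a quick back-of-the-envelope in Python:

# Specific kinetic energy at jettison, using the (Claude-sourced,
# unverified) speeds above.
srb_v = 4_800 / 3.6    # km/h -> m/s, ~1,333 m/s
et_v = 28_000 / 3.6    # km/h -> m/s, ~7,778 m/s

def ke_per_kg(v):
    return 0.5 * v ** 2  # J/kg

print(f"SRB:   {ke_per_kg(srb_v) / 1e6:.1f} MJ/kg")  # ~0.9 MJ/kg
print(f"ET:    {ke_per_kg(et_v) / 1e6:.1f} MJ/kg")   # ~30.2 MJ/kg
print(f"ratio: {ke_per_kg(et_v) / ke_per_kg(srb_v):.0f}x")  # ~34x

Something like 34 times the energy per kilogram to dissipate on the way down – the difference between a parachute splashdown and needing a full-blown reentry vehicle.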
If you are going to post LLM output to make factual claims, please do us the courtesy of performing the verification yourself, or otherwise leave out the supposed details. We can all type a prompt into a chatbot. We also don't need more imaginary numbers floating about for search engines to find and become the foundation for future myths.
> If you are going to post LLM output to make factual claims, please do us the courtesy of performing the verification yourself, or otherwise leave out the supposed details.
Would you have felt better if I had posted numbers from a superficial Google search? Or from Wikipedia? How thorough and well-sourced would my verification have to be according to your standards?
> We can all type a prompt into a chatbot.
Then why doesn't everyone? State of the art LLM chat bots are perfectly capable of answering simple questions such as the above, and in great detail – enough details to enable further, independent research and verification, if desired.
People are asking questions here and hoping that someone has the motivation to research a real answer (or has expertise to share). It used to be that such questions were accompanied by "and a cursory search came up with these links which leave me confused" or a Fermi estimate, and it would be nice to return to such standards. Adding unverified LLM numbers as answers doesn't help, nor would "my random friend said". True/necessary/kind (2/3) are the tests we are supposed to be applying, right? https://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/
You might be right but you haven’t proven the LLM figures posted by Adrian wrong yet, and regardless of source that would be necessary to this argument, an argument that I otherwise don’t care about.
Which makes none of your comment true (or rather not yet proven), necessary or kind.
>You might be right but you haven’t proven the LLM figures posted by Adrian wrong yet, and regardless of source that would be necessary to this argument, an argument that I otherwise don’t care about.
We shouldn't have to prove a negative here. For the time being LLMs are simply not accurate enough.
Where LLMs are wrong, which is not infrequently at the moment, they're usually wrong in ways which are not easily apparent to people unfamiliar with the subject running a cursory search.
"Why was the Space Shuttle's external fuel tank not recovered" isn't a complex question which requires some unique insight only shared by five experts worldwide, two of which frequent ACX, nor does it require a Fermi estimate by the Bayesian gurus that upheld the standards in days long gone by.
Looks like "sometimes it's cheaper to throw something away rather than re-use it" is the answer, so far as I can find one.
There's a good discussion on Reddit about this question; we all got side-tracked by "is the ChatGPT answer good enough?" instead of the original question, which is "WHY did they not re-use the external tank?"
A combination of "they wanted to shave every pound of weight off" and "plans were there to use them to build a space station, but those never went anywhere because the adjustments would have been too expensive, too heavy, and needed too much new equipment", so it ended up as light as they could manage, and throwaway to that end.
This is the very first time I haven't been disgusted by the idea of having upvotes on ACX, because I would upvote this comment and downvote its parent.
...no, they are perfectly capable of autocompleting a piece of text that begins with some combination of the words you typed in and whatever else the vendor chooses to prepend in a manner that results in a statistical match for text found on the internet.
This is not the same thing, because the internet is full of rubbish, and also because there is nothing in the process to distinguish between "here's the answer" and "here's a piece of text in the style that an answer would be written in, if you were given an answer". Your "numbers from Claude, so double-check them" disclaimer implies you are at least somewhat aware of this, and it would be disingenuous to now claim otherwise.
Hence people specifically wanting a response from a human: yes, humans can also be wrong, make things up and/or lie, but our well trained intuitions for how to detect that stuff at least have some small hope of matching the territory in this case; when an entirely alien mechanism is generating the text and also our mental model is demonstrably mistaken about what it is even doing in the first place, there is essentially none.
> > perfectly capable of answering simple questions
> ...no, they are perfectly capable of autocompleting a piece of text that begins with some combination of the words you typed in and whatever else the vendor chooses to prepend in a manner that results in a statistical match for text found on the internet.
Potayto, potahto. I used to think like you, until I started using LLMs in earnest. Sure, I'm still encountering hallucinations on a regular basis, but the "statistical parrot" mental model falls far, far short of their real capabilities.
> Sure, I'm still encountering hallucinations on a regular basis
Potato, potahto. Outside tech demos, when people ask questions they want actual answers and not hallucinations. It's amazing that the dog can sing, but it's not going to replace my CD collection.
I'm not convinced the "statistical parrot" model is wrong, rather I think that that's a good description for a lot of what people do. It's not a complete model of people and it's not what we mean by understanding, which is why LLMs are such a mixed bag.
To be clear, Ripple is actually a nuclear device designed by the Lawrence Radiation Laboratory, not a data analysis tool developed by Langley Research Center. ChatGPT just lies, all the fucking time, constantly, incessantly, I don't understand how seemingly smart people just try using it and trusting its results without any attempt at verification.
I don't think I've ever gotten a truly useful answer out of it, though I haven't tried in a while. The AI worship around here is really annoying and dare I say may blind some people to its limitations.
Ironically, I find the art generators vastly more impressive than the LLMs, despite the former getting far more hate. Of course that may be why.
I don't even think it's necessarily useless for data-gathering or analysis, but when it's used in such a way the results MUST be verified because it WILL just lie.
I think that ChatGPT is useful in finding specialized nomenclature. E.g., if one is looking for a named law or model or theorem, and one can describe what the law/model/theorem is about in layman's language, the LLM can be useful in finding the name of the thing.
On the other end, if one wants to survey possibilities and select them according to some measure, e.g. 20 lowest-boiling inorganic gases, good luck, unless some human has already compiled such a list - even if every candidate is already documented in Wikipedia, in the LLM's training set.
( And I've been steering clear of politically controversial questions, where the RLHF Woke indoctrination is likely to obscure what the _capabilities_ of the technology really are. )
Whatever Google embedded in its search is pretty awful. I just tried
> What is an example of a molecule with an S4 rotation reflection axis but no mirror planes and no center of inversion?
It replied with
>A classic example of a molecule with an S4 rotation reflection axis but no mirror planes or center of inversion is methane (CH4); its tetrahedral geometry allows for three S4 axes, making it a prime example of this symmetry element without additional symmetry features like mirror planes or a center of inversion.
which is just wrong. Methane has 6 mirror planes. In fact, _this_ LLM "knows" this. If I ask it
>How many mirror planes does methane have?
I get:
>Methane has 6 mirror planes.
>Explanation: Since methane has a tetrahedral geometry, you can create a mirror plane by selecting any pair of hydrogen atoms and passing a plane through them and the central carbon atom. This gives you 6 possible mirror planes.
"In 1962 the United States conducted its final atmospheric nuclear test series, Operation Dominic. The devices tested were designed and built by the Los Alamos Scientific Laboratory (LASL) and the Lawrence Radiation Laboratory (LRL). During the test series, LRL conducted four tests of a radically new design called the Ripple concept. Tests of the Ripple concept demonstrated performance characteristics that eclipse those of all nuclear weapons designed before or since. For numerous reasons discussed in the article, the Ripple concept was not pursued, but the technology it pioneered has been in continual development—for peaceful purposes—to this day. Until now, very little has been known about these tests and the concept behind them. This article, the result of a multiyear investigation, sheds light on the Ripple program for the first time, allowing for a largely complete account. Included are the origins of the concept and its designer, the technical characteristics, the significant role played by the geopolitical context, the test series in detail, and the cancellation and legacy of the program."
So I'm going with Chastity here since she knew what she meant in the first place and the ChatGPT did not suggest it as one possible answer, and did get the LRC and LRL confused when replying to her.
I just noticed - LRL does not correspond to Langley Research Center (LRC). So yeah, the AI is too stupid to work out that "L" and "C" are different, it's just regurgitating something from its training data.
If you google household ingredients for washing a floor, you get a bunch of hits for vinegar, vinegar & dishsoap, and vinegar and baking soda combined. The last of these is nonsense, because the 2 active ingredients cancel each other out. I asked GPT4 for ingredients a few months ago and it gave me vinegar and dishsoap. I asked whether adding baking soda would help, and it agreed heartily: "Adding baking soda to your cleaning mixture can enhance its effectiveness, especially for tackling tough stains and odors on linoleum floors."
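(For the record, the underlying chemistry is NaHCO3 + CH3COOH → CH3COONa + H2O + CO2: the base neutralizes the acid, so once the fizzing stops you're mopping with dilute sodium acetate solution rather than with either active ingredient.)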
Jeffrey Soreff, a chemist who posts here frequently, has posted many wrong answers it has gotten from GPT4 for chemistry questions that are easy to look up the answer to. Recently he posted that it doesn't understand what a tetrahedron is -- can't make an image even when he explains that it's a pyramid with a triangular base.
GPT4 often does no more than compile the most frequent google hits, but then it packages them so that they sound authoritative. I don't think either a superficial google search or a chatbot query is adequate for questions like OP's. You have to google for answers and then you poke around and check the one you think is probably accurate. If you don't know how to poke around and check that particular question then you just don't know for sure what the answer is.
It's also kind of rude to chatbot an answer to somebody's question. With the same amount of typing the person could have asked a chatbot this question instead of you. Obviously they are looking for a different source of information.
"rude"? What is "rude" about it? Did I insult anyone? Some people do seem to be offended, though…
I openly stated my source. Feel free to ignore such comments.
Edit: I am actually quite surprised about the general reaction to my lighthearted comment. Admit it – "double-check them before building your own reusable launch system" is at least worthy of a smirk, no?
I notice you don't respond to my main point, examples of inaccuracy. Anyhow, about the rudeness: It's sort of like answering somebody's question by sending them to this: https://letmegooglethat.com
Using LLM output in online discussions or forums can come across as impolite for a few reasons, especially if it’s clear that the response isn’t a personal one:
Lack of Authentic Engagement: Posting a generated response might make it seem like the person didn’t genuinely engage with the question or community. People generally appreciate thoughtful replies that show understanding and connection with the original question or topic.
Unfiltered or Imprecise Information: Sometimes, LLMs might generate responses that are too generic, overly detailed, or miss subtle context cues that a real person would catch. This can make the response feel like an awkward fit for the conversation and might even be misleading if not carefully reviewed.
Lack of Personal Touch or Effort: Communities often value responses that show effort, nuance, or personal insight. Posting LLM responses can seem dismissive, as though the question wasn’t worth the time to answer individually.
Potential for Misinformation: If people recognize a response as AI-generated, they may also distrust its accuracy. Unless the response is verified, it might not meet the standards of a community that values reliable, accurate information.
Risk of Redundancy or Dullness: LLM responses may sound “robotic” or repeat information already available in standard sources, lacking the freshness or original thinking that people often look for in online discussions.
When using AI-generated answers, giving credit or adding a personal summary can help avoid these pitfalls and maintain the quality of engagement.
Actually, it's not nonsense, I've heard of baking soda + vinegar. You apply baking soda to the grease on the floor, then add vinegar and mop it up. I don't know how well it works, because I've never tried it, but it's not implausible. I think the idea is that this makes the grease lift off, but I'm not sure. Or maybe it's just something that someone tried, and it worked for them.
> Would you have felt better if I had posted numbers from a superficial Google search? Or from Wikipedia?
Yes. Because those would have contained context and metadata and citations which could be further checked and traced back, and terminate in a NASA PDF or something. Even if they had contained literally the same ex cathedra statement word for word as ChatGPT, you would be no worse off in trying to factcheck it, and the *lack* of all that would have told you something useful: that it is a low-quality source of dubious veracity that may well be wrong. Meanwhile, some LLM obiter dicta kills all curiosity and is the junk food of writing: fattening webpages while providing no nutrition.
Also, "results from a Google search" should, ideally, not be credited to "I Googled and found this", but "[this site] says...", because the fact that you found it from Google doesn't tell you a whole lot about its reliability. (Google would like the fact that they brought it up to mean something, though.)
Agreed. I tend to include urls with information I find, so that people reading the comment can see exactly where I found the information (and, usually, what organization it is associated with).
On a related note - even a very superficial Google search is often improved by including the name of a plausibly authoritative organization in the search terms. ( Bluntly, I got a bit sick of the back-and-forth on the shuttle H2/O2 tank meta level questions above, so I did a cursory Google search - but including _NASA_ in the search, and then commented, quoting from the NASA site about the shuttle and citing the URL. )
For anyone that did want to know the numbers, it looks like the above is broadly right. I've done a brief Google but haven't dug especially deeply (though I see a bunch of sites that seem to agree).
The fuel tank was jettisoned after main engine cutoff (MECO) but prior to orbit (https://en.wikipedia.org/wiki/Space_Shuttle_external_tank) - the shuttle then used its orbital maneuvering system (OMS) engines to get the rest of the way to orbit.
Speed at MECO was 17,000 mph (https://pages.cs.wisc.edu/~yat/space/facts.htm), which is about 27,400 km/h. 17,000 seems pretty rounded but about the right number, and I can't find any other numbers out there. It's pretty close to the 28k that Claude gave.
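(Checking the conversion: 17,000 mph × 1.609 km/mile ≈ 27,360 km/h, or about 7.6 km/s – just short of the ~7.8 km/s of a circular low Earth orbit, which is consistent with the OMS engines doing the final push.)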
This is broadly right; a massive, ongoing complaint in the 80s and 90s from the folks who would broadly be SpaceX employees today was that we constantly threw away the ET when it was almost at orbital velocity and that we should have found a way to push it to orbit and use it for space stations.
If you are going to post the output of some system of norms and values, please provide evidence that this set of norms and values has been backtested across millennia of human culture and indeed promotes human thriving. We can all judge others easily based upon our own standards as the absolute correct stance. We also don’t need more imaginary moral systems floating around for impressionable algorithms to find and become the foundation of future moral myths.
I think this is unreasonably strict. From my experience I'd estimate that LLM provided figures are no less accurate than cursory Google searches. I'd be very surprised if Claude gave figures outside reasonable confidence intervals for questions like this more than 5% of the time.
I think either people heavily exaggerate hallucination rates on cutting edge models because of bias, or otherwise I'm very very curious to see what kind of tortured queries they're giving to get such inaccurate results.
LLMs are unlikely to give figures less accurate than a cursory google search, the step where the error is more likely to enter in here is when you ask the LLM to explain something, and it gives an answer which it justifies based on the presumed relevance of those figures. The presumption in accepting the LLM's answer is that it's more likely than an uninformed person to be generating a correct answer, for which those figures are an appropriate explanation. In areas where the average person doesn't have enough domain knowledge to generate the right answer with some cursory googling, LLMs are wrong quite a lot, but this also makes their inaccuracy hard for the average person to check. It's easiest to check on straightforward factual matters which you're familiar with, but you know the average person is not.
>It's easiest to check on straightforward factual matters which you're familiar with, but you know the average person is not.
For example, I recently asked ChatGPT 4o "Which Valar took part in the War of Wrath?"
The actual answer is that the published Silmarillion doesn't explicitly name any Valar as doing more than agreeing to the expedition, uses language that's ambiguous but can be (and often is) read to imply that some or all of the Valar are directly involved (referring to "the Host of the Valar" and "the Might of the Valar" doing various things in the war), and includes details that are usually read as implying that the Valar didn't accompany the expedition (namely, the Maia Eönwë commanding the army rather than Manwë, Oromë, or Tulkas, and after the final battle Eönwë ordered Sauron to return to Valinor for judgement by Manwë as he felt he lacked the authority to judge a fellow Maia).
ChatGPT answers this question okay. It glosses over the ambiguity of the text, but the overall framework isn't badly wrong, and it offers up some mostly-plausible speculation on how four of the Valar might have been involved. It does mention some stuff that Tulkas did in the Book of Lost Tales (the earliest version of the story), but doesn't seem to notice that that was a BoLT-only part of the story.
My follow-up question, "Did the involvement of the Valar differ in different versions of the story?", intended to tease out the problems in the bit of the answer about Tulkas, resulted in some pretty bad hallucinations. For example, it says that in the Book of Lost Tales, "Tulkas, Manwë, and others were imagined as physically fighting in the War of Wrath." Tulkas did explicitly take part in the War of Wrath in BoLT, but Manwë and the rest of the Valar emphatically did not. Manwë actively opposed the expedition in BoLT and Tulkas, most of the Elves of Valinor, and many of the "Children of the Valar" (i.e. Maiar) defied him and went anyway. ChatGPT also badly overstates Tulkas's involvement in later versions of the story (where he actually isn't mentioned at all) and brings up some stuff that I'm pretty sure is hallucination about Ulmo being explicitly involved in some versions.
I recently posted an example of such an encounter with an LLM (I always ask Google Gemini, because I don't want to sign up for an account, but I already have a Google account).
I asked a question taking the form "here is a couplet from a broadsheet ballad - what does the singer mean by these lines?", and noted in an earlier thread that the answer I received was abysmally bad.
But, of note, I got a response in that other thread saying that I shouldn't be calling that a bad answer because it looks like a good answer if you're unfamiliar with the facts.
It's still not clear to me why that should make the answer better.
I'd say it makes the answer worse! Because if it's *obviously* wrong, you're going to catch that and not propagate it, but if it looks plausibly right, you might be fooled into thinking that it's trustworthy unless proven otherwise.
Along with other types of nerdery more commonly represented on this blog, I'm also a martial arts nerd, and I've spent a fair amount of time asking ChatGPT questions about martial arts. My takeaway is that ChatGPT is quite familiar with the sorts of names people tend to mention in association with martial arts, the sorts of adjectives people use and which styles are most frequently mentioned, but its accuracy in actually answering even basic and straightforward questions related to the martial arts is much worse than even cursory googling. But to someone who doesn't actually know anything about the subject in question, it sounds perfectly credible.
tldr - in the first case ChatGPT o1 wound up getting the explanation for the color of CuCl4 2- badly wrong, and I had to lead it by the nose to force it to finally cough up the right answer ( detailed transcript of the session at https://chatgpt.com/share/671f016f-3d64-8006-8bf5-3c2bba4ecedc )
in the second case whatever Google is embedding in its searches (Gemini???) falsely claimed that methane has no planes of mirror symmetry (in the course of giving methane as an incorrect answer to my original query)
Just to be clear: I _WANT_ AI to succeed. I would very much like to have a nice quiet chat with a real-life HAL9000 equivalent before I die. It is probably the last transformational technology that I have a shot at living to see. But it is _not_ reliable (nor at AGI) yet.
If I ask an LLM something that is easy to Google, it is likely to give a response that is close to that answer. I seldom ask an LLM for such things, because I usually try searching first based on likely keywords (I often want to go deeper so I need useful further links, not a tepid summary, and this saves time). I probably have a higher prior on incorrect hallucinations than someone who goes to ChatGPT first.
LLMs are currently more likely to be misleading...but perhaps not by a huge margin. Most of the answers I get to web searches are quite wrong, and usually obviously so. (Most of them are so wrong they're irrelevant.) But I ignore the (blatantly) wrong search responses. LLMs tend to give one answer, and when it's wrong, it often isn't obviously wrong.
He reported that they *were* from a Chatbot (and which one) which is the important part. He gave his source. Most web searches don't yield a verifiable source either, and some of them return invented answers. (Not being invented by an LLM doesn't mean they weren't just invented.)
>After the solid rockets are jettisoned, the main engines provide thrust which accelerates the Shuttle from 4,828 kilometers per hour (3,000 mph) to over 27,358 kilometers per hour (17,000 mph) in just six minutes to reach orbit. They create a combined maximum thrust of more than 1.2 million pounds.
The Space Shuttle was a terribly suboptimal design hobbled by political compromises. It is a wonder it looked and worked as well as it did. NASA was already ossifying into a terrible bureaucracy, slowly losing its skills and spirit from the glory days of Apollo. Adding fuel tank recovery and refurbishment would have added years and billions of dollars to the schedule and budget; it was not even seriously considered. Even for the modern SpaceX, catching the booster is pretty audacious, and it was maybe one second from a failure, according to Musk's accidental Diablo sound overlay.
Eh, there were politics involved to be sure, but the real hobble on the Space Shuttle was the Air Force's 15 x 60 ft, 65k lb LEO, 40k lb polar payload requirement with full launch-site return capability being maintained. While it is impossible to portray the mathematics of all of this in any sort of short post, the bottom line is this: the primary design goal of the Space Shuttle was to enable the USAF to throw large and heavy militarily relevant payloads into militarily relevant orbits at a high launch cadence. This is a very challenging design goal, which came with the attendant high costs. Unfortunately, these high costs had to be paid for every mission, even civilian scientific ones that could have accomplished their goals with a far less capable launch platform.
The Space Shuttle was and still is a technological marvel, but with a price tag to boot. Forcing the civilian portion of its users (who ended up being by far the majority use case) to bear the burden of the exceptional costs for corner-of-the-envelope military use is the grand tragedy of the program. There should have been a much cheaper civilian version that would probably still be flyable today.
Right, that is a good point. A better (and more expensive during the design stage) approach would have been having a configurable setup where, like with the SpaceX Falcon Heavy, the boosters could be recoverable unless the mission profile forces them being expended.
It depends what the downside of the Shuttle not having the specific military mission capabilities would have been, had the occasion to use them come up.
There are two parts of this:
1. Could a non-shuttle launch vehicle perform the mission?
2. How much of a luxury was the mission, i.e. what happens if we can't do it at all?
For 1, I understand the answer was mostly yes. Launching large spy satellites (which I understand to be the main driver for payload size and polar orbit capabilities) wound up mostly being done by disposable boosters (Delta and Titan, IIRC) anyway.
The main leftover mission I'm aware of that other launch systems couldn't do was to snatch a Soviet satellite out of orbit and return to Vandenberg. I am not familiar with any thinking on why this would be a necessary or desirable thing to be able to do, so I will tentatively classify it as a luxury mission.
It's a tragedy that the many proposed expeditions and payloads that might have advanced our scientific understanding were never allowed to happen, because launch costs consumed so much of an always-finite research budget.
Take the recent Europa Clipper mission - it was originally required by congressional mandate to fly on the SLS, which according to the NASA OIG would have cost a minimum of 2.5 billion USD, on a program that costs 5.2 billion overall (so essentially increasing total expenditures by 50%). Once someone actually did some accounting, Congress relented and allowed it to launch a few days ago on a Falcon Heavy for a mere $178 million. It will take longer to reach Europa, but this is a savings of $2.3 billion at a minimum, which can presumably be put to some better uses.
Alas, Starship can't get past Low Earth Orbit, nobody has an in-space maneuvering stage that can fit inside a Starship and take a Clipper to Europa, and neither of those things is going to change in two years even if you tell the engineers to get started today.
How many enormous projects are ruined by requirements which are decided in advance, which turn out to be unachievable, but which then can't be changed later on once we learn more? It seems like the answer is "most of them".
If Starship had stuck with its original specs, it wouldn't have worked -- they needed to try a few things and figure out what was practical and what was not. On the other hand, Elon doesn't have a flawless record here either, and the Cybertruck suffers from similar problems where it's a worse vehicle than it would have been if they hadn't made certain dumb commitments at the planning stage.
In addition to what others are saying, the big orange fuel tank was actually the cheapest part of the Shuttle, by far. It had no engines, so it was basically just expensive pipes, tanks, and insulation.
In order to make that reusable not only would significant weight be added, it would also make it more expensive. The savings in reusability would have been more than cancelled out by the lost payload and refurbishment costs.
As I recall (from reading a fantastic book on the history of the Challenger disaster, which I recommend here without any reservations: https://www.amazon.com/Challenger-Story-Heroism-Disaster-Space/dp/198217661X), the original plan was to have a two-part launch system, where the shuttle is first flown on a carrier to a suitably high altitude and then launched from there for whatever its mission was.
Both parts were envisioned to be reusable, but the cost was, well, astronomical.
In this case, the chatbot got it right - the external fuel tank carries all the propellant the shuttle's main engines will use taking the shuttle all the way to orbit (well, except for a small circularization burn with the maneuvering thrusters). So the tank can't be discarded until the Shuttle is at orbital velocity, roughly 8 km/s. At that point, there's no question of it coming back to the launch site or parachuting into the ocean anywhere near the launch site; it's going to come down halfway around the planet.
And it's going to be subject to the same sort of reentry heating environment as the Space Shuttle itself. A simple aluminum tank with just some spray-on insulation to keep the propellants chill before launch, is not going to survive that. A tank which could survive that, would probably weigh enough that the already-marginal Shuttle couldn't carry any actual payload (and certainly not the big military spysats that were part of the requirement).
The only remotely sensible proposal for reusing the Shuttle external tanks was to take them *all* the way to orbit, and then use them as pressurized habitat or propellant-storage elements on a large space station. A single external tank would have more interior volume than all the pressurized elements of the current ISS combined. But nobody had the budget to build a space station that big even if they got the pressure vessels delivered to orbit for free, and their orbits would have decayed long before NASA got around to using them, so they just ditched them in the ocean instead.
The space shuttle fuel tank was just a big tank, it didn't contain anything capital-intensive or fancy like advanced rocket engines. Those were on the Shuttle itself, and those were recovered.
Even recovering the SRBs didn't make sense, because they were only "reusable" in a marketing sense: the cost of fishing them out of the ocean and refurbishing them was greater than the cost of just manufacturing additional SRBs, but reusability was one of the justifications for the expense of the shuttle program so reusable that was deemed.
I mean, the simple answer as to why they couldn't recover the fuel tank was because the entire launch stack was designed around a set of premises, and one of those premises is that that tank was going to be jettisoned and break up on reentry instead of being recovered, and if they'd wanted to recover it that would have required a fundamentally different spacecraft than the one they designed. The Super Heavy Booster is an entire rocket, with engines and electronics and computers and cameras and radios and miles of wiring and sensors. The external tank was just a tank.
It never really got past power point engineering as far as I can tell, but ULA had a proposal for Vulcan that involved basically detaching the engine section and recovering only that - for basically that reason, the majority of the cost of the rocket is the engines and avionics, while the tanks (basically just big empty aluminum cans) are bulky, kind of delicate, and therefore hard to recover.
What would happen to our society if a large-scale, long-term blackout occurred? There is a high chance that it would get quite bad very quickly. Transportation and health services would likely cease to function within a few days, and many people would face food and water insecurity almost immediately. This highlights the urgent need for greater investment in preparedness, as there aren't even exercises to train those responsible for managing such crises. If you're interested in more details, I have written a new post in my living literature review that offers a deep dive into the consequences of blackouts: https://existentialcrunch.substack.com/p/the-consequences-of-blackouts
> What would happen to our society if a large-scale, long-term blackout occurred? There is a high chance that it would get quite bad very quickly.
I think you've independently reinvented the best argument for "prepping."
It's less for the Big One or Zombie Apocalypse and more for longer stretches without power and services brought on by severe weather and / or state inadequacy.
Also the strongest argument for your own solar + battery setup, re Scott's last post.
I like that you're optimistic and think something can be done collectively - I personally believe the only "real" solution is personal and independent, because you can actually prep and get your electricity off the grid through your own efforts without having to persuade a lot of other 'general public' people who don't believe in thinking ahead.
Yes, having a backup for smaller catastrophes makes sense, but I generally think that for anything lasting longer than ~2 weeks, it is important to be able to rely on the state. Otherwise, everything will turn quite bad.
Generally, I don't think long term prepping really works. I've been looking into global catastrophes and societal collapse a lot and my conclusion is that prepping for a duration longer than two weeks mainly buys you the privilege of dying a bit later than the rest.
I think that if the society you live in is generally unprepared, most people would die. You might live longer if you are prepped, but I doubt it would be a great life to have. But we could plausibly make it through large catastrophes if we take the time to prepare. And generally, I think we are going in this direction, albeit slowly. For example, more and more countries are starting to consider large-scale catastrophes in their risk assessments.
Prepping will work fine as long as you prep properly, which nearly no-one does.
You don't have a bunch of water and food that will last N months, because as you point out, some armed gang will show up and take it from you, and all your years of prepping will be for naught.
If you don't have guns and tactical training, your prepping is missing its foundation.
This is a common fantasy but just seems like a way to make your eventual death more cinematic. I've seen Die Hard and Home Alone too, but I don't like my chances against an armed gang, because the defining aspect of gangs is that they outnumber me. I mean, *maybe* I can use my superior planning and knowledge of the terrain to my advantage and fight off dozens of dudes on my own, but that sounds like a fantasy rather than a plan.
If it comes down to a world where we're having gunfights over the world's remaining food resources then I'm probably dead. But there's a bunch of far more likely scenarios where having a bunch of sensible preps will turn a horrible situation into a much more comfortable one, even if the people without preps aren't dying, they're just spending their days standing in line waiting for supplies.
I'm not a real prepper, but I think part of prepping is also training. No matter how much food you have stored, you'll run out eventually. Do you have the skills to obtain more, whether by growing, finding, hunting, or something?
Same goes for everything else: heating, cooling, washing, repair work, etc.
In Gaza, the optimal approach to prepping would probably involve finding out a lot of intelligence about Hamas and then offering it to Israel in exchange for passage to elsewhere.
At some point, you're prepping for the collapse of civilization and that's beyond your resources. But a couple weeks without power is not the end of civilization, it could happen, and having some notion of how you'll heat your home/charge your phone/cook your meals is probably smart.
A comment in support: a few years ago, we lost power for 11 days after an ice storm. Now, the greater area we were in didn't lose power for nearly that long, but because we live in a rural area with a lot of trees, there were hundreds of line breaks and it took a long time to fix them.
Depends. I can see solar panels and batteries, and not doing much at night for a couple of weeks. It wouldn't even be that bad. You should have a bit of food storage, but that should be easily doable in a rural environment (i.e. space wouldn't be a problem).
What do preppers do about water? That seems like the biggest barrier if there's a long-term catastrophe, unless you live by a river, and it's hard to store a month's worth of water like you can with food.
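Back-of-the-envelope, at the commonly cited one gallon (~4 L) per person per day: a 30-day supply for a family of four is ~120 gallons (~450 L), a couple of 55-gallon drums. Storable, but bulky, and anything much longer than a month gets impractical fast.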
Right, their first year your job is to keep them safe. After that, your job is to make them dangerous.
Funnily enough, just yesterday I had a related discussion with my wife. Our daughter (3) watched a children's TV show about a family of dinosaurs we hadn't seen before, and we noticed that several episodes go like this: 1) the protagonists meet other dinosaurs which are just as intelligent as them, 2) they get into conflict, 3) the protagonists win the conflict and ... eat the opposition.
At first my wife was shocked, but after a short talk we both agreed that it might be a good counterweight. We had been complaining about some other TV shows she had seen before, which often portrayed even dangerous predators as actually nice deep down, and suggested that all animals can get along great through the power of friendship. We know quite a few college-educated adults who have completely unrealistic ideas about animals (along the lines of "bears would never attack humans unless unnecessarily provoked, anything else is animal-hating propaganda") and it often strongly biases their politics in a bad way.
Obviously you still need to talk with the kid and contextualize what they see, but if you do they can handle it just fine.
I agree about kids getting in touch with the full range of emotion, but most are quite able to do that without seeing, when quite small, a movie with a tragic death in it. Think about real kids. They cry far harder and oftener than adults do, and while some crying is set off by anger or frustration, lots is set off by grief. Kids grieve when they lose a favorite toy, when they suddenly start missing their parents when at school or with a babysitter, when their feelings are hurt during play -- and also when they see sad things happen. I can remember a few times when I was small and cried in sympathy when some other kid fell down. I saw my daughter do that too when she was small. And while the things kids cry with grief over seem small to adults, they seem huge to the kids. Don't you remember being a kid, how it felt?
My introduction to death was when I was about six, when my mother brought me along with her to visit the deadhouse (as the hospital mortuary was referred to), where the body of a neighbour was laid out before being coffined. Everybody knelt and said a prayer; then afterwards there was the funeral and burial, and I saw the coffin being put into the grave and the grave filled in.
I did have a couple of dreams about death and being buried myself after that, but I put it together that hey, when that happens, I'll be dead so I won't know or care.
So I find it hard to think that a movie could be very traumatic as an introduction to loss, when the Kangaroo is hopping away in good health and still alive, just not going to be around Dot any more 😃
Absolutely, though, I wouldn't let six year olds watch horror movies or anything with explicit violence.
I think precisely because it's a smaller problem, it's more likely to haunt a child. Kids often have separation anxiety, and don't commonly have a self-preservation instinct (or any other sense of their imminent mortality). We had a dog die this summer and then got a new one; the conversation with my five-year-old about dog cremation was morbidly entertaining, but having the new dog run away from me on a walk and go missing for fifteen minutes clearly made more of an impression.
This should not be misconstrued as an argument against watching the movie; if nothing else, practice separation is presumably actively helpful in handling real separation anxiety.
This matches my experience
My intuition is that certain kinds of media and themes will shock and/or distress kids the first time they see them, regardless of the age they are when they're first exposed to those themes. I've known people who were exposed to horror movies when they were six and people whose parents sheltered them until they were sixteen, and it seems like they had roughly the same "trauma" response to their first horror movies. While it might be harder for a six year old to dismiss the "trauma" of their first horror movie...I dunno. Teenagers are often extremely good at leaning into and even enhancing their own "trauma" with rationalizations for why scary stuff might be real. I know I managed to be as scared about alien abduction at 14 years old as I was of monsters in the closet at six.
You weren't asking about horror movies, but probably exposure to the concept of loss is similar. It's going to hurt regardless of the age your kid is when they first experience it as a theme. Five seems like a reasonable age to be exposed to the concept of loss and why it makes people sad, especially if it's handled in a beautiful way. And while I'm not a parent, my intuition is that it's better to have initial exposure to the themes of loss via media rather than a sudden shock of it in real life (the sudden death of a pet or grandparent, etc).
Edit to add: Be prepared for your kid to be heartlessly disinterested in your beloved Dot and the Kangaroo. He might be unforgiving of the rough animation, slow pace, etc. after being trained on 2020s modern media (presuming you've allowed him to see any).
He may not cry, so go ahead. It's a way of introducing children to the idea of parting and ending of things, and at least Kangaroo isn't dead, she's just leaving to let Dot return to her human life (and it's open ended to the possibility that they might meet again later).
Sometimes people leave (they move away, they die) but while that's sad, it's not a bad thing and you go on with your life.
I think it's important to introduce this stuff to kids early. Tragedy sticks with you, and I look back on the tragic - even borderline traumatic - stories of my childhood as the most enriching.
I was not ready for Bambi when I was 5. I was ready to learn about death, but the movie presented it in the most traumatic form imaginable: the death by violence of a mommy. After seeing Bambi I understood the reality of death better, but it really left a huge dent in my sense of wellbeing. I was tormented for years by stories that formed in my head about little animals left in the nest grieving, terrified and starving to death because their mothers did not come back. I think you should err on the side of caution with kids about matters like that.
When I was college age I taught nursery school for a while part time, and when the school guinea pig died we showed kids her body the next day, and answered their questions, and let them examine her body or pet her (and then wash their hands really well). We also told parents about the guinea pig's death and how we'd talked with the kids about it. I think that was a decent introduction to death for the kids.
Movies hit kids differently, I think. My son helped us bury his grandparents' dog, and I think that was a positive experience. I would be reluctant to show him Bambi though.
I remember being very captivated by Bambi's father, though, and how he sort of revealed himself to Bambi (and to the child viewer) - which I don't think would have happened without the death of his mother. I remember also that frightful word - "Man!" Which implicit lesson re nature and loss has only grown more true as time passes.
Once I was sitting next to one of the local springs with a den of Cub Scouts among others, I think it was, listening to a park staffer give a little talk about the "spirit" of the springs, a variety of salamander, and she asked the assembled group if they knew what the salamander's chief predator or threat was.
We all sat awkwardly unable to answer for a few moments.
Then a kid piped up bravely, and with something of that Bambi drama: "Man?" And I think we all, adults and children, thought to ourselves, yeah, that tracks.
"Uh, good guess? Actually, it's crawfish", she said.
Dumbo seems like a good precursor to Bambi. It's wrenching but Mother doesn't die.
Bad emotions are instructive, not traumatizing. Trauma is traumatizing. Movies don't cause trauma.
Eremolalos's experience above suggests otherwise. And there are plenty of movies not made for kids that would absolutely traumatize a kid.
But I don't think this would traumatize him. I'm just not sure if it would be net positive or harmful.
I think you're right to ask (and right that there are absolutely movie experiences that can traumatize young kids, who sometimes don't know or can't fully process that they aren't witnessing real events), and I don't think it's a question that can be settled on the basis of some principle. It's not actually "should we shelter kids from difficult feelings or toughen them up to real life?", it's "is THIS child ready to have a salutary, if sad, experience with THIS movie?" You know your kid and his sensitivity level--does he remind you of you at that age? Does he tend to take things in stride or does he have intense feelings sometimes that don't make sense to you as an adult, does he perseverate or worry to the extreme about things related to loss?
Mine used to have very intense feelings about lost objects, which he tended to personify. It wasn't "I'm sad I don't have this thing anymore" so much as "this thing will not be OK without me to take care of it." (He also once in a while had a panic meltdown for incomprehensible reasons, e.g. that a toy was lying at the bottom of a wading pool. Well, that one was somewhat comprehensible; it clearly held symbolism for him.) He was an incredible packrat because getting rid of possessions felt to him like, maybe, dumping a pet by the side of the road--you don't do that just b/c it's old and not fun anymore, and in that same spirit of care we had to keep old toys, papers he had scribbled on... and oh my Lord, we left behind a rotting stick at the creek once whose tip was shaped a little like a horse's head, and he brought it up for 2 years whenever he couldn't sleep. He's outgrown this completely now at 11, thankfully. All that to say, these things seemed to be proxies for him for a deep aversion to the idea that irreversible loss and sorrow exist. I literally went back and tried to find that damn horse-head stick because he could not. stop. thinking about it. (And believe me, I was trying to ease him along into accepting that sometimes things are just gone. And eventually he did.)
So... I probably wouldn't have shown him the kangaroo movie at that age. (I screwed up on a few movies. I wanted him to love The Iron Giant but showed it to him too early and he thought it was sad & scary.) But when I saw him shift over to being less sensitive, which might have been around 7, then I probably would. It's not an either-or question, it's a question of when.
But if none of this rings a bell at all, if it all sounds so unlike your kid that mine just might be a space alien, maybe you should just go ahead! I do think a lot of kids could handle themes like this at 5--or be sad but in a way they can feel is helping them, as maybe you were.
Your story about your son is a good example of why you don't have to deliberately introduce most kids to tragedy and loss. The little losses of their lives feel huge to them. They are very emotionally alive.
It was an old favorite from when you were five, so you know for a fact that at least one kid can handle it. The question to ask is whether you think your son is meaningfully different from you in his ability to handle sad stories. In general, I agree with the majority here that it's good for kids to encounter difficult emotions in fiction. And if it becomes too much, you can always pause the movie to talk about the movie and give him a chance to decide whether he continues or not.
Yeah I think they need the opportunity to experience and rehearse different kinds of emotions in a safe manner while they're developing. If they have a strong reaction then have a discussion with them afterwards to help them process and contextualize the feelings, but I don't think that shielding them from children's movies is going to help much in the long run.
One thing I think is pretty true is that not only *can* most humans have all of the standard suite of human emotions, but we *will* have them with some regularity, because the brain doesn't like to let parts of itself just atrophy, inactive forever. If you don't have any appropriate targets for an emotion in your experiences, you will attach that emotion to *something* going on in your life, in a way that may be less appropriate and more damaging than just having an actual correct target.
For negative emotions, movies are probably a good target because they provide accurate contexts to attach those emotions to, while having those events not be something in your own life that you have to constantly fear or obsess over.
I've watched several videos of the SpaceX Super Heavy Booster going straight back to the launchpad, which is one of the coolest things I've seen currently happening in the space program. A question I've never seen answered: Why could they never recover the space shuttle fuel tank like they could with the rocket boosters? It seems like a huge piece of equipment to throw away and replace every single time.
I've wondered that too.
I always assumed that because it stayed attached for much longer, it burned up after being jettisoned. But that's just a guess.
The SRBs (solid rocket boosters) were jettisoned at a speed of roughly 4,800 km/h, while the ET (external tank) was jettisoned at over 28,000 km/h – close to orbit – so reentry was much more violent.
(numbers from Claude, so double-check them before building your own reusable launch system)
If you are going to post LLM output to make factual claims, please do us the courtesy of performing the verification yourself, or otherwise leave out the supposed details. We can all type a prompt into a chatbot. We also don't need more imaginary numbers floating about for search engines to find and become the foundation for future myths.
> If you are going to post LLM output to make factual claims, please do us the courtesy of performing the verification yourself, or otherwise leave out the supposed details.
Would you have felt better if I had posted numbers from a superficial Google search? Or from Wikipedia? How thorough and well-sourced would my verification have to be according to your standards?
> We can all type a prompt into a chatbot.
Then why doesn't everyone? State of the art LLM chat bots are perfectly capable of answering simple questions such as the above, and in great detail – enough details to enable further, independent research and verification, if desired.
People are asking questions here and hoping that someone has the motivation to research a real answer (or has expertise to share). It used to be that such questions were accompanied by "and a cursory search came up with these links which leave me confused" or a Fermi estimate, and it would be nice to return to such standards. Adding unverified LLM numbers as answers doesn't help, nor would "my random friend said". True/necessary/kind (2/3) are the tests we are supposed to be applying, right? https://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/
You might be right but you haven’t proven the LLM figures posted by Adrian wrong yet, and regardless of source that would be necessary to this argument, an argument that I otherwise don’t care about.
Which makes none of your comment true (or rather not yet proven), necessary or kind.
>You might be right but you haven’t proven the LLM figures posted by Adrian wrong yet, and regardless of source that would be necessary to this argument, an argument that I otherwise don’t care about.
We shouldn't have to prove a negative here. For the time being LLMs are simply not accurate enough.
Where LLMs are wrong, which is not infrequently at the moment, they're usually wrong in ways which are not easily apparent to people unfamiliar with the subject running a cursory search.
"Why was the Space Shuttle's external fuel tank not recovered" isn't a complex question which requires some unique insight only shared by five experts worldwide, two of which frequent ACX, nor does it require a Fermi estimate by the Bayesian gurus that upheld the standards in days long gone by.
Looks like "sometimes it's cheaper to throw something away rather than re-use it" is the answer, so far as I can find one.
There's a good discussion on Reddit about this question; we all got side-tracked by "is the ChatGPT answer good enough?" away from the original question, which is "WHY did they not re-use the external tank?"
A combination of "they wanted to shave every pound of weight off" and "there were plans to use the tanks to build a space station, but those never went anywhere because the adjustments would have made them too expensive, too heavy, and required too much new equipment", so the tank ended up "as light as we can manage, and throwaway to that end":
https://www.reddit.com/r/space/comments/1k4g1o/the_external_tank_from_the_space_shuttle/
This is the very first time I haven't been disgusted by the idea of having upvotes on ACX, because I would upvote this comment and downvote its parent.
I can see upvotes.
The comment you replied to has one upvote, and its parent has nine.
Yours will have one after I upvote it.
> perfectly capable of answering simple questions
...no, they are perfectly capable of autocompleting a piece of text that begins with some combination of the words you typed in and whatever else the vendor chooses to prepend in a manner that results in a statistical match for text found on the internet.
This is not the same thing, because the internet is full of rubbish, and also because there is nothing in the process to distinguish between "here's the answer" and "here's a piece of text in the style that an answer would be written in, if you were given an answer". Your "numbers from Claude, so double-check them" disclaimer implies you are at least somewhat aware of this, and it would be disingenuous to now claim otherwise.
Hence people specifically wanting a response from a human: yes, humans can also be wrong, make things up and/or lie, but our well trained intuitions for how to detect that stuff at least have some small hope of matching the territory in this case; when an entirely alien mechanism is generating the text and also our mental model is demonstrably mistaken about what it is even doing in the first place, there is essentially none.
> > perfectly capable of answering simple questions
> ...no, they are perfectly capable of autocompleting a piece of text that begins with some combination of the words you typed in and whatever else the vendor chooses to prepend in a manner that results in a statistical match for text found on the internet.
Potayto, potahto. I used to think like you, until I started using LLMs in earnest. Sure, I'm still encountering hallucinations on a regular basis, but the "statistical parrot" mental model falls far, far short of their real capabilities.
> Sure, I'm still encountering hallucinations on a regular basis
Potato, potahto. Outside tech demos, when people ask questions they want actual answers and not hallucinations. It's amazing that the dog can sing, but it's not going to replace my CD collection.
I'm not convinced the "statistical parrot" model is wrong, rather I think that that's a good description for a lot of what people do. It's not a complete model of people and it's not what we mean by understanding, which is why LLMs are such a mixed bag.
> Would you have felt better if I had posted numbers from a superficial Google search? Or from Wikipedia?
Yes. Obviously. Here's the result when you ask it about ripple: https://i.imgur.com/QapRDIp.png
To be clear, Ripple is actually a nuclear device designed by the Lawrence Radiation Laboratory, not a data analysis tool developed by Langley Research Center. ChatGPT just lies, all the fucking time, constantly, incessantly, I don't understand how seemingly smart people just try using it and trusting its results without any attempt at verification.
+1
I don't think I've ever gotten a truly useful answer out of it, though I haven't tried in a while. The AI worship around here is really annoying and dare I say may blind some people to its limitations.
Ironically, I find the art generators vastly more impressive than the LLMs, despite the former getting far more hate. Of course that may be why.
I don't even think it's necessarily useless for data-gathering or analysis, but when it's used in such a way the results MUST be verified because it WILL just lie.
I think that ChatGPT is useful in finding specialized nomenclature. E.g., if one is looking for a named law or model or theorem, and one can describe what the law/model/theorem is about in layman's language, the LLM can be useful in finding the name of the thing.
On the other end, if one wants to survey possibilities and select them according to some measure, e.g. 20 lowest-boiling inorganic gases, good luck, unless some human has already compiled such a list - even if every candidate is already documented in Wikipedia, in the LLM's training set.
( And I've been steering clear of politically controversial questions, where the RLHF Woke indoctrination is likely to obscure what the _capabilities_ of the technology really are. )
Whatever Google embedded in its search is pretty awful. I just tried
> What is an example of a molecule with an S4 rotation reflection axis but no mirror planes and no center of inversion?
It replied with
>A classic example of a molecule with an S4 rotation reflection axis but no mirror planes or center of inversion is methane (CH4); its tetrahedral geometry allows for three S4 axes, making it a prime example of this symmetry element without additional symmetry features like mirror planes or a center of inversion.
which is just wrong. Methane has 6 mirror planes. In fact, _this_ LLM "knows" this. If I ask it
>How many mirror planes does methane have?
I get:
>Methane has 6 mirror planes.
>Explanation: Since methane has a tetrahedral geometry, you can create a mirror plane by selecting any pair of hydrogen atoms and passing a plane through them and the central carbon atom. This gives you 6 possible mirror planes.
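(For what it's worth, that second answer checks out combinatorially: a mirror plane through the carbon is fixed by choosing which 2 of the 4 hydrogens it contains, so the count is

    $\binom{4}{2} = \frac{4!}{2!\,2!} = 6$

which matches the textbook six sigma-d planes of methane's Td point group.)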
That's funny, because I went to the effort of verifying _your_ assertion that ChatGPT gets this wrong, and it turns out, you're wrong:
https://imgur.com/prV9Mch
That's with ChatGPT 4o.
If you had used Google, you would find that this result is also incorrect. Ripple was a high-yield nuclear device concept.
https://direct.mit.edu/jcws/article-abstract/23/2/133/101892/Ripple-An-Investigation-of-the-World-s-Most?redirectedFrom=fulltext
Old-fashioned Google turns up first that Ripple is some sort of cryptocurrency, then adding in LRL gives me:
https://direct.mit.edu/jcws/article-abstract/23/2/133/101892/Ripple-An-Investigation-of-the-World-s-Most?redirectedFrom=fulltext
"In 1962 the United States conducted its final atmospheric nuclear test series, Operation Dominic. The devices tested were designed and built by the Los Alamos Scientific Laboratory (LASL) and the Lawrence Radiation Laboratory (LRL). During the test series, LRL conducted four tests of a radically new design called the Ripple concept. Tests of the Ripple concept demonstrated performance characteristics that eclipse those of all nuclear weapons designed before or since. For numerous reasons discussed in the article, the Ripple concept was not pursued, but the technology it pioneered has been in continual development—for peaceful purposes—to this day. Until now, very little has been known about these tests and the concept behind them. This article, the result of a multiyear investigation, sheds light on the Ripple program for the first time, allowing for a largely complete account. Included are the origins of the concept and its designer, the technical characteristics, the significant role played by the geopolitical context, the test series in detail, and the cancellation and legacy of the program."
So I'm going with Chastity here since she knew what she meant in the first place and the ChatGPT did not suggest it as one possible answer, and did get the LRC and LRL confused when replying to her.
I just noticed - LRL does not correspond to Langley Research Center (LRC). So yeah, the AI is too stupid to work out that "L" and "C" are different, it's just regurgitating something from its training data.
If you google household ingredients for washing a floor, you get a bunch of hits for vinegar, vinegar & dishsoap, and vinegar and baking soda combined. The last of these is nonsense, because the 2 active ingredients cancel each other out. I asked GPT4 for ingredients a few months ago and it gave me vinegar and dishsoap. I asked whether adding baking soda would help, and it agreed heartily: "Adding baking soda to your cleaning mixture can enhance its effectiveness, especially for tackling tough stains and odors on linoleum floors."
Jeffrey Soreff, a chemist who posts here frequently, has posted many wrong answers it has gotten from GPT4 for chemistry questions that are easy to look up the answer to. Recently he posted that it doesn't understand what a tetrahedron is -- can't make an image even when he explains that it's a pyramid with a triangular base.
GPT4 often does no more than compile the most frequent google hits, but then it packages them so that they sound authoritative. I don't think either a superficial google search or a chatbot query is adequate for questions like OP's. You have to google for answers and then you poke around and check the one you think is probably accurate. If you don't know how to poke around and check that particular question then you just don't know for sure what the answer is.
It's also kind of rude to chatbot an answer to somebody's question. With the same amount of typing the person could have asked a chatbot this question instead of you. Obviously they are looking for a different source of information.
"rude"? What is "rude" about it? Did I insult anyone? Some people do seem to be offended, though…
I openly stated my source. Feel free to ignore such comments.
Edit: I am actually quite surprised about the general reaction to my lighthearted comment. Admit it – "double-check them before building your own reusable launch system" is at least worthy of a smirk, no?
I notice you don't respond to my main point, examples of inaccuracy. Anyhow, about the rudeness: It's sort of like answering somebody's question by sending them to this: https://letmegooglethat.com
Yes, I did get a smirk out of it, but Victualis is right.
Here is what ChatGPT has to say on the topic:
Using LLM output in online discussions or forums can come across as impolite for a few reasons, especially if it’s clear that the response isn’t a personal one:
Lack of Authentic Engagement: Posting a generated response might make it seem like the person didn’t genuinely engage with the question or community. People generally appreciate thoughtful replies that show understanding and connection with the original question or topic.
Unfiltered or Imprecise Information: Sometimes, LLMs might generate responses that are too generic, overly detailed, or miss subtle context cues that a real person would catch. This can make the response feel like an awkward fit for the conversation and might even be misleading if not carefully reviewed.
Lack of Personal Touch or Effort: Communities often value responses that show effort, nuance, or personal insight. Posting LLM responses can seem dismissive, as though the question wasn’t worth the time to answer individually.
Potential for Misinformation: If people recognize a response as AI-generated, they may also distrust its accuracy. Unless the response is verified, it might not meet the standards of a community that values reliable, accurate information.
Risk of Redundancy or Dullness: LLM responses may sound “robotic” or repeat information already available in standard sources, lacking the freshness or original thinking that people often look for in online discussions.
When using AI-generated answers, giving credit or adding a personal summary can help avoid these pitfalls and maintain the quality of engagement.
Actually, it's not nonsense, I've heard of baking soda + vinegar. You apply baking soda to the grease on the floor, then add vinegar and mop it up. I don't know how well it works, because I've never tried it, but it's not implausible. I think the idea is that this makes the grease lift off, but I'm not sure. Or maybe it's just something that someone tried, and it worked for them.
In any case, what I asked AI about would not have worked. I asked about just mixing it into the wash water along with the dish soap and vinegar.
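(The underlying chemistry, for anyone who wants to check the "cancel each other out" claim: it's a plain acid-base neutralization,

    $\mathrm{NaHCO_3 + CH_3COOH \rightarrow CH_3COONa + H_2O + CO_2\uparrow}$

so if you pre-mix them in the wash water, the fizzing is the active ingredients leaving, and you end up mopping with dilute sodium acetate. Applying baking soda to the grease first and adding vinegar afterwards at least puts that CO2 release where the dirt is.)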
> Would you have felt better if I had posted numbers from a superficial Google search? Or from Wikipedia?
Yes. Because those would have contained context and metadata and citations which could be further checked and traced back, and terminate in a NASA PDF or something. Even if they had contained literally the same ex cathedra statement word for word as ChatGPT, you would be no worse off in trying to factcheck it, and the *lack* of all that would have told you something useful: that it is a low-quality source of dubious veracity that may well be wrong. Meanwhile, some LLM obiter dicta kills all curiosity and is the junk food of writing: fattening webpages while providing no nutrition.
Just want to chime in that I agree with Victualis here. Results from a superficial Google search or from Wikipedia would indeed be preferable.
Also, "results from a Google search" should, ideally, not be credited to "I Googled and found this", but "[this site] says...", because the fact that you found it from Google doesn't tell you a whole lot about its reliability. (Google would like the fact that they brought it up to mean something, though.)
Agreed. I tend to include urls with information I find, so that people reading the comment can see exactly where I found the information (and, usually, what organization it is associated with).
On a related note - even a very superficial Google search is often improved by including the name of a plausibly authoritative organization in the search terms. ( Bluntly, I got a bit sick of the back-and-forth on the shuttle H2/O2 tank meta level questions above, so I did a cursory Google search - but including _NASA_ in the search, and then commented, quoting from the NASA site about the shuttle and citing the URL. )
I would trust a google search more than an LLM - at least then I know that at least one real person on the internet believed it.
For anyone that did want to know the numbers, it looks like the above is broadly right. I've done a brief Google but haven't dug especially deeply (though I see a bunch of sites that seem to agree).
The fuel tank was jettisoned after main engine cutoff (MECO) but prior to orbit (https://en.wikipedia.org/wiki/Space_Shuttle_external_tank) - the shuttle then used its orbital maneuvering system (OMS) engines to get the rest of the way to orbit.
Speed at MECO was 17,000 mph (https://pages.cs.wisc.edu/~yat/space/facts.htm), which is about 27,400 km/h. 17,000 seems pretty rounded, and I can't find any other numbers out there, but it's pretty close to the 28k that Claude gave.
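(A minimal sanity check on the conversions, in Python; the mile-to-km factor is exact, the speeds are the rounded figures from the links in this thread:

    # mph -> km/h, using the exact definition 1 mile = 1.609344 km
    MPH_TO_KMH = 1.609344
    srb_sep_mph = 3_000    # speed at SRB separation (NASA's rounded figure)
    meco_mph = 17_000      # speed at main engine cutoff (cs.wisc.edu fact sheet)
    print(srb_sep_mph * MPH_TO_KMH)   # 4828.03  -> the "4,828 km/h" quoted elsewhere
    print(meco_mph * MPH_TO_KMH)      # 27358.85 -> ~27,400 km/h, close to Claude's 28,000

)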
This is broadly right; a massive, ongoing complaint in the 80s and 90s from the folks who would broadly be SpaceX employees today was that we constantly threw away the ET when it was almost at orbital velocity and that we should have found a way to push it to orbit and use it for space stations.
Many Thanks! I did much the same thing, and got essentially the same answer.
If you are going to post the output of some system of norms and values, please provide evidence that this set of norms and values has been backtested across millennia of human culture and indeed promotes human thriving. We can all judge others easily based upon our own standards as the absolute correct stance. We also don’t need more imaginary moral systems floating around for impressionable algorithms to find and become the foundation of future moral myths.
I found the comment amusing, and I've already chiseled it on a tablet and buried it in the backyard.
I think this is unreasonably strict. From my experience I'd estimate that LLM provided figures are no less accurate than cursory Google searches. I'd be very surprised if Claude gave figures outside reasonable confidence intervals for questions like this more than 5% of the time.
I think either people heavily exaggerate hallucination rates on cutting edge models because of bias, or otherwise I'm very very curious to see what kind of tortured queries they're giving to get such inaccurate results.
LLMs are unlikely to give figures less accurate than a cursory google search; the step where the error is more likely to enter in is when you ask the LLM to explain something, and it gives an answer which it justifies based on the presumed relevance of those figures. The presumption in accepting the LLM's answer is that it's more likely than an uninformed person to be generating a correct answer, for which those figures are an appropriate explanation. In areas where the average person doesn't have enough domain knowledge to generate the right answer with some cursory googling, LLMs are wrong quite a lot, but this also makes their inaccuracy hard for the average person to check. It's easiest to check on straightforward factual matters which you're familiar with, but which you know the average person is not.
>It's easiest to check on straightforward factual matters which you're familiar with, but you know the average person is not.
For example, I recently asked ChatGPT 4o "Which Valar took part in the War of Wrath?"
The actual answer is that the published Silmarillion doesn't explicitly name any Valar as doing more than agreeing to the expedition, uses language that's ambiguous but can be (and often is) read to imply that some or all of the Valar are directly involved (referring to "the Host of the Valar" and "the Might of the Valar" doing various things in the war), and includes details that are usually read as implying that the Valar didn't accompany the expedition (namely, the Maia Eönwë commanding the army rather than Manwë, Oromë, or Tulkas, and after the final battle Eönwë ordered Sauron to return to Valinor for judgement by Manwë as he felt he lacked the authority to judge a fellow Maia).
ChatGPT answers this question okay. It glosses over the ambiguity of the text, but the overall framework isn't badly wrong, and it offers up some mostly-plausible speculation on how four of the Valar might have been involved. It does mention some stuff that Tulkas did in the Book of Lost Tales (the earliest version of the story), but doesn't seem to notice that that was a BoLT-only part of the story.
My follow-up question, "Did the involvement of the Valar differ in different versions of the story?", intended to tease out the problems in the bit of the answer about Tulkas, resulted in some pretty bad hallucinations. For example, it says that in the Book of Lost Tales, "Tulkas, Manwë, and others were imagined as physically fighting in the War of Wrath." Tulkas did explicitly take part in the War of Wrath in BoLT, but Manwë and the rest of the Valar emphatically did not. Manwë actively opposed the expedition in BoLT and Tulkas, most of the Elves of Valinor, and many of the "Children of the Valar" (i.e. Maiar) defied him and went anyway. ChatGPT also badly overstates Tulkas's involvement in later versions of the story (where he actually isn't mentioned at all) and brings up some stuff that I'm pretty sure is hallucination about Ulmo being explicitly involved in some versions.
I recently posted an example of such an encounter with an LLM (I always ask Google Gemini, because I don't want to sign up for an account, but I already have a Google account).
I asked a question taking the form "here is a couplet from a broadsheet ballad - what does the singer mean by these lines?", and noted in an earlier thread that the answer I received was abysmally bad.
But, of note, I got a response in that other thread saying that I shouldn't be calling that a bad answer because it looks like a good answer if you're unfamiliar with the facts.
It's still not clear to me why that should make the answer better.
I'd say it makes the answer worse! Because if it's *obviously* wrong, you're going to catch that and not propagate it, but if it looks plausibly right, you might be fooled into thinking that it's trustworthy unless proven otherwise.
Along with other types of nerdery more commonly represented on this blog, I'm also a martial arts nerd, and I've spent a fair amount of time asking ChatGPT questions about martial arts. My takeaway is that ChatGPT is quite familiar with the sorts of names people tend to mention in association with martial arts, the sorts of adjectives people use, and which styles are most frequently mentioned, but its accuracy in actually answering even basic and straightforward questions related to the martial arts is much worse than even cursory googling. But to someone who doesn't actually know anything about the subject in question, it sounds perfectly credible.
>I'm very very curious to see what kind of tortured queries they're giving to get such inaccurate results.
It doesn't take tortured queries, see my comments at
https://www.astralcodexten.com/p/open-thread-353/comment/74496967
and
https://www.astralcodexten.com/p/open-thread-353/comment/74504922
tldr - in the first case ChatGPT o1 wound up getting the explanation for the color of CuCl4^2- badly wrong, and I had to lead it by the nose to force it to finally cough up the right answer (detailed transcript of the session at https://chatgpt.com/share/671f016f-3d64-8006-8bf5-3c2bba4ecedc)
in the second case whatever Google is embedding in its searches (Gemini???) falsely claimed that methane has no planes of mirror symmetry (in the course of giving methane as an incorrect answer to my original query)
Just to be clear: I _WANT_ AI to succeed. I would very much like to have a nice quiet chat with a real-life HAL9000 equivalent before I die. It is probably the last transformational technology that I have a shot at living to see. But it is _not_ reliable (nor at AGI) yet.
If I ask an LLM something that is easy to Google, it is likely to give a response that is close to that answer. I seldom ask an LLM for such things, because I usually try searching first based on likely keywords (I often want to go deeper so I need useful further links, not a tepid summary, and this saves time). I probably have a higher prior on incorrect hallucinations than someone who goes to ChatGPT first.
LLMs are currently more likely to be misleading...but perhaps not by a huge margin. Most of the answers I get to web searches are quite wrong, and usually obviously so. (Most of them are so wrong they're irrelevant.) But I ignore the (blatantly) wrong search responses. LLMs tend to give one answer, and when it's wrong, it often isn't obviously wrong.
Automating human judgement about what is blatantly wrong is what we now need.
He reported that they *were* from a Chatbot (and which one) which is the important part. He gave his source. Most web searches don't yield a verifiable source either, and some of them return invented answers. (Not being invented by an LLM doesn't mean they weren't just invented.)
Approximate confirmation of the numbers from
https://www.nasa.gov/reference/the-space-shuttle/
>After the solid rockets are jettisoned, the main engines provide thrust which accelerates the Shuttle from 4,828 kilometers per hour (3,000 mph) to over 27,358 kilometers per hour (17,000 mph) in just six minutes to reach orbit. They create a combined maximum thrust of more than 1.2 million pounds.
Computers at the time weren't good enough to control the descent to the degree of accuracy required.
The Space Shuttle was a terribly suboptimal design hobbled by political compromises. It is a wonder it looked and worked as well as it did. NASA was already ossifying into a terrible bureaucracy, slowly losing its skills and spirit from the glory days of Apollo. Adding fuel tank recovery and refurbishment would have added years and billions of dollars to the schedule and budget; it was not even seriously considered. Even for the modern SpaceX, catching the booster is pretty audacious, and it was maybe one second from a failure, according to Musk's accidental Diablo sound overlay.
Eh, there were politics involved to be sure, but the real hobble on the space shuttle was the Air Force's requirement of a 15 x 60 ft payload bay, 65k lb to LEO, and 40k lb to polar orbit, with full launch-site return capability. While it is impossible to portray the mathematics of all of this in any sort of short post, the bottom line is this: the primary design goal of the space shuttle was to enable the USAF to throw large and heavy militarily relevant payloads into militarily relevant orbits at a high launch cadence. This is a very challenging design goal, which came with the attendant high costs. Unfortunately, these high costs had to be paid for every mission, even civilian scientific ones that could have accomplished their goals with a far less capable launch platform.
The Space Shuttle was and still is a technological marvel, but with a price tag to boot. Forcing the civilian portion of its users (who ended up being by far the majority use case) to bear the burden of the exceptional costs for corner-of-the-envelope military use is the grand tragedy of the program. There should have been a much cheaper civilian version that would probably still be flyable today.
Right, that is a good point. A better (and more expensive during the design stage) approach would have been having a configurable setup where, like with the SpaceX Falcon Heavy, the boosters could be recoverable unless the mission profile forces them being expended.
It sounds like it's a tragedy in the same way that paying for fire insurance for years and never letting your house catch on fire is a tragedy.
It depends what the downside of the Shuttle not having the specific military mission capabilities would have been, had the occasion to use them come up.
There are two parts of this:
1. Could a non-shuttle launch vehicle perform the mission?
2. How much of a luxury was the mission, i.e. what happens if we can't do it at all?
For 1, I understand the answer was mostly yes. Launching large spy satellites (which I understand to be the main driver for payload size and polar orbit capabilities) wound up mostly being done by disposable boosters (Delta and Titan, IIRC) anyway.
The main leftover mission I'm aware of that other launch systems couldn't do was to snatch a Soviet satellite out of orbit and return to Vandenberg. I am not familiar with the thinking on why this would send a message or be a desirable thing to be able to do, so I will tentatively classify it as a luxury mission.
It's a tragedy in that many proposed expeditions and payloads that might have advanced our scientific understanding were never allowed to happen, because launch costs consumed so much of an always-finite research budget.
Take the recent Europa Clipper mission - it was originally required by congressional mandate to fly on the SLS, which according to the NASA OIG would have cost a minimum of 2.5 billion USD, on a program that costs 5.2 billion overall (so essentially increasing total expenditures by 50%). Once someone actually did some accounting, congress relented and allowed it to launch a few days ago on a Falcon Heavy for a mere $178 million. It will take longer to reach Europa, but this is a savings of $2.3 billion at a minimum, which can presumably be put to some better uses.
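(Checking the arithmetic: 2.5B - 0.178B = 2.322B, which rounds to the $2.3 billion claimed, and 2.5/5.2 is about 48%, so "essentially increasing total expenditures by 50%" also holds up.)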
Like launching a second Europa Clipper on a Starship in two years to get there before the first one does.
Alas, Starship can't get past Low Earth Orbit, nobody has an in-space maneuvering stage that can fit inside a Starship and take a Clipper to Europa, and neither of those things is going to change in two years even if you tell the engineers to get started today.
How many enormous projects are ruined by requirements which are decided in advance, which turn out to be unachievable, but which then can't be changed later on once we learn more? It seems like the answer is "most of them".
If Starship had stuck with its original specs, it wouldn't have worked -- they needed to try a few things and figure out what was practical and what was not. On the other hand, Elon doesn't have a flawless record here either, and the Cybertruck suffers from similar problems where it's a worse vehicle than it would have been if they hadn't made certain dumb commitments at the planning stage.
In addition to what others are saying, the big orange fuel tank was actually the cheapest part of the Shuttle, by far. It had no engines, so it was basically just expensive pipes, tanks, and insulation.
In order to make it reusable, not only would significant weight have been added, it would also have been more expensive. The savings from reusability would have been more than cancelled out by the lost payload and refurbishment costs.
That is a very difficult technical problem, one that wasn't possible until SpaceX made it work 9 years ago.
I don't think anyone else does it even today?
As I recall (from reading a fantastic book on the history of the Challenger disaster, which I recommend here without any reservations: https://www.amazon.com/Challenger-Story-Heroism-Disaster-Space/dp/198217661X), the original plan was to have a two-part launch system, where the shuttle is first flown on a carrier to a suitably high altitude and then launched from there for whatever its mission was.
Both parts were envisioned to be reusable, but the cost was, well, astronomical.
In this case, the chatbot got it right - the external fuel tank carries all the propellant the shuttle's main engines will use taking the shuttle all the way to orbit (well, except for a small circularization burn with the maneuvering thrusters). So the tank can't be discarded until the Shuttle is at orbital velocity, roughly 8 km/s. At that point, there's no question of it coming back to the launch site or parachuting into the ocean anywhere near the launch site; it's going to come down halfway around the planet.
And it's going to be subject to the same sort of reentry heating environment as the Space Shuttle itself. A simple aluminum tank with just some spray-on insulation to keep the propellants chill before launch, is not going to survive that. A tank which could survive that, would probably weigh enough that the already-marginal Shuttle couldn't carry any actual payload (and certainly not the big military spysats that were part of the requirement).
The only remotely sensible proposal for reusing the Shuttle external tanks was to take them *all* the way to orbit, and then use them as pressurized habitat or propellant-storage elements on a large space station. A single external tank would have more interior volume than all the pressurized elements of the current ISS combined. But nobody had the budget to build a space station that big even if they got the pressure vessels delivered to orbit for free, and their orbits would have decayed long before NASA got around to using them, so they just ditched them in the ocean instead.
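A back-of-the-envelope way to see why near-orbital reentry is so much harsher than SRB recovery: the energy the structure has to dissipate scales roughly with kinetic energy per kilogram, i.e. with the square of velocity. A quick sketch using the thread's rounded figures (~4,800 km/h at SRB separation, ~8 km/s near orbit):

    # specific kinetic energy, E/m = v^2 / 2, in J/kg
    def specific_ke(v_m_per_s: float) -> float:
        return 0.5 * v_m_per_s ** 2

    print(specific_ke(1_333))  # SRB separation (~4,800 km/h): ~0.9 MJ/kg
    print(specific_ke(8_000))  # near-orbital ET jettison:      32 MJ/kg, ~36x more

Real reentry heating is more complicated than this, but the v^2 scaling is why a bare aluminum tank that's fine at SRB-separation speeds needs a genuine heat shield coming back from orbital velocity.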
The space shuttle fuel tank was just a big tank, it didn't contain anything capital-intensive or fancy like advanced rocket engines. Those were on the Shuttle itself, and those were recovered.
Even recovering the SRBs didn't make sense, because they were only "reusable" in a marketing sense: the cost of fishing them out of the ocean and refurbishing them was greater than the cost of just manufacturing additional SRBs. But reusability was one of the justifications for the expense of the shuttle program, so reusable they were deemed.
I mean, the simple answer as to why they couldn't recover the fuel tank was because the entire launch stack was designed around a set of premises, and one of those premises is that that tank was going to be jettisoned and break up on reentry instead of being recovered, and if they'd wanted to recover it that would have required a fundamentally different spacecraft than the one they designed. The Super Heavy Booster is an entire rocket, with engines and electronics and computers and cameras and radios and miles of wiring and sensors. The external tank was just a tank.
It never really got past PowerPoint engineering as far as I can tell, but ULA had a proposal for Vulcan that involved detaching the engine section and recovering only that, for basically that reason: the majority of the cost of the rocket is the engines and avionics, while the tanks (basically just big empty aluminum cans) are bulky, kind of delicate, and therefore hard to recover.
What would happen to our society if a large-scale, long-term blackout occurred? There is a high chance that it would get quite bad very quickly. Transportation and health services would likely cease to function within a few days, and many people would face food and water insecurity almost immediately. This highlights the urgent need for greater investment in preparedness, as there aren't even exercises to train those responsible for managing such crises. If you're interested in more details, I have written a new post in my living literature review that offers a deep dive into the consequences of blackouts: https://existentialcrunch.substack.com/p/the-consequences-of-blackouts
> What would happen to our society if a large-scale, long-term blackout occurred? There is a high chance that it would get quite bad very quickly.
I think you've independently reinvented the best argument for "prepping."
It's less for the Big One or Zombie Apocalypse and more for longer stretches without power and services brought on by severe weather and / or state inadequacy.
Also the strongest argument for your own solar + battery setup, re Scott's last post.
I like that you're optimistic and think something can be done collectively - I personally believe the only "real" solution is personal and independent, because you can actually prep and get your electricity off the grid through your own efforts without having to persuade a lot of other 'general public' people who don't believe in thinking ahead.
Yes, having a backup for smaller catastrophes makes sense, but I generally think that for everything taking longer than ~2 weeks, it is important to be able to rely on the state. Otherwise, everything will turn quite bad.
Generally, I don't think long term prepping really works. I've been looking into global catastrophes and societal collapse a lot and my conclusion is that prepping for a duration longer than two weeks mainly buys you the privilege of dying a bit later than the rest.
I think that if the society you live in is generally unprepared, most people would die. You might live longer if you are prepped, but I doubt it would be a great life to have. But we could plausibly make it through large catastrophes if we take the time to prepare. And generally, I think we are going in this direction, albeit slowly. For example, more and more countries are starting to consider large-scale catastrophes in their risk assessments.
Prepping will work fine as long as you prep properly, which nearly no-one does.
You don't have a bunch of water and food that will last N months, because as you point out, some armed gang will show up and take it from you, and all your years of prepping will be for naught.
If you don't have guns and tactical training, your prepping is missing its foundation.
This is a common fantasy but just seems like a way to make your eventual death more cinematic. I've seen Die Hard and Home Alone too, but I don't like my chances against an armed gang, because the defining aspect of gangs is that they outnumber me. I mean, *maybe* I can use my superior planning and knowledge of the terrain to my advantage and fight off dozens of dudes on my own, but that sounds like a fantasy rather than a plan.
If it comes down to a world where we're having gunfights over the world's remaining food resources then I'm probably dead. But there's a bunch of far more likely scenarios where having a bunch of sensible preps will turn a horrible situation into a much more comfortable one, even if the people without preps aren't dying, they're just spending their days standing in line waiting for supplies.
I'm not a real prepper, but I think part of prepping is also training. No matter how much food you have stored, you'll run out eventually. Do you have the skills to obtain more, whether by growing, finding, hunting, or something?
Same goes for everything else: heating, cooling, washing, repair work, etc.
In Gaza, the optimal approach to prepping would probably involve finding out a lot of intelligence about Hamas and then offering it to Israel in exchange for passage to elsewhere.
At some point, you're prepping for the collapse of civilization and that's beyond your resources. But a couple weeks without power is not the end of civilization, it could happen, and having some notion of how you'll heat your home/charge your phone/cook your meals is probably smart.
A comment in support: a few years ago, we lost power for 11 days after an ice storm. Now, the greater area we were in didn't lose power for nearly that long, but because we live in a rural area with a lot of trees, there were hundreds of line breaks and it took a long time to fix them.
In that scenario, the best prep seems to be a car and some money.
Depends. I can see solar panels and batteries, and not doing much at night for a couple of weeks. It wouldn't even be that bad. You should have a bit of food storage, but that should be easily doable in a rural environment (i.e. space wouldn't be a problem).
What do preppers do about water? Seems like that's the biggest barrier if there's a long-term catastrophe, unless you live by a river, and it's hard to store a month's worth of water like you can with food.
Not really. https://theprepared.com/homestead/reviews/best-two-week-emergency-water-storage-containers/
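To put rough numbers on "a month's worth": the commonly cited guideline (FEMA's, among others) is about one gallon (~4 L) per person per day for drinking and minimal sanitation. A quick sketch under that assumption, with a hypothetical household of four:

    # month of stored water, assuming ~1 gallon (~4 L) per person per day
    people = 4
    liters_per_person_per_day = 4   # drinking + minimal cooking/hygiene
    days = 30
    print(people * liters_per_person_per_day * days)  # 480 L

480 L is two or three 55-gallon (208 L) drums: bulky, but not absurd for a garage or basement.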