1027 Comments
Dragor's avatar

There's a conversation about dating below, and I saw a German comedian describe a trick that hooked her. https://www.youtube.com/watch?v=Q8im9MXbV-o

I don't want to spoil the punchline, but I unironically think that would work? Like, it's an innovative way to have face to face interactions in a way that gives you the opportunity to be kind and make a good impression. He may even have been picking the people he transacts with based on dating preference?

Ebrima Lelisa's avatar

Can you please spoil the punchline

Dragor's avatar

She's been using airbnb to make contact with guys she wants to pursue relationships with. She buys something from a guy, and she realizes he actually isn't trying to sell stuff to make money, he's doing it to meet women. She hooks up with him.

Eremolalos's avatar

I don't understand how you use airbnb to make contact with guys (except for guys who work for airbnb, of course).

Dragor's avatar

Honestly, that bit makes less sense to me, in part because I'm not sure it would work as well and in part because it's not as much of her joke. My sense is that she's using it to filter for hobbies/home ownership then flirting with the hosts, but she might be exaggerating that.

Ogre's avatar

I don't either. AirBNB guests come from far away. So it is not dating in the sense of getting to know people for relationships, as one typically does that with locals - relocation is costly. So it must be hookups with tourists.

Eremolalos's avatar

But I still don't understand what she *does.* If you are looking into renting a certain airbnb do you talk to the owners? Is that what she does? But if so how does she know the owners are datable males of the right age?

Dragor's avatar

Oh. Airbnb lists facts about the "host," and the host is usually the owner.

B Civil's avatar

People used to sell Bibles door-to-door you know…😆

Bob's avatar

Did people who submitted to ACX grants get a confirmation by email, or any other email communication afterwards? I haven't received anything, so I'm trying to figure out if I mistyped my email.

Deiseach's avatar

I realise as a European (more or less) I should not be asking this on here, but then again America has a lot of cooking traditions derived from the mainland of continental Europe, so here goes.

Can anyone tell me what the hell it is with Germans and cheese?

I've been watching some German cookery channels recently and they seem to put cheese in *everything*. Cooking fish? Cooking vegetables? Cooking bacon? Just grate up some cheese and slap it on there!

(I'm only surprised nobody has yet put cheese into one of the dessert recipes).

These channels came to my notice by accident and they're fascinating: it's almost like food. I'll be watching and nodding along like "Uh-huh, that seems fine; okay wouldn't have thought of that myself but it's not totally crazy" and then one more step and they take a sharp left turn into What The Hellsville.

E.g. you got store-bought rolls of puff pastry and rashers of streaky bacon? Okay here's what you do! Unroll the puff pastry, lay out your bacon on top. Yep, following along so far. Brush with tomato ketchup. Huh, well okay, I see where you are going. Scatter over some dried oregano. Yeah, herbs, that's fine. If I'm doing this myself might switch it out for something else but keep going, you're holding my attention. Brush the edges of the pastry with beaten egg. So far, so orthodox.

Then comes the cheese.

Grate up 200g of a semi-soft white cheese and scatter it over the herby, ketchupy bacon strips. Oh, and you must grate the cheese yourself by hand, we can't be doing with buying pre-grated soft cheese like Mozzarella or the likes. Nope nope nope, if you don't have at least three graters of different sizes and construction, what are you even watching German home cookery channel for?

Okay, now you've lovingly scattered your grated cheese on top, here come the scallions (green onions). Well I like scallions myself so I can't object too much but it is rather a lot on top of cheese on top of oregano on top of ketchup on top of bacon. But clearly I have not the heart and stomach of an emperor, and an emperor of Germania too. Chop up your green onions, scatter on top. Done that? Good, now here come the hardboiled eggs.

Of course you have already hardboiled some eggs. Remember, it's not a German home cookery recipe without cheese, and where there is cheese, can hardboiled eggs be far behind?

Now grate your eggs, on the second different type of grater (yes this is why you have so many graters for specific functions). Who the hell grates eggs instead of chopping them up with a knife? We do, of course!

The grated eggs go on top of the chopped scallions on top of the grated cheese on top of the oregano on top of the ketchup on top of the bacon on top of Old Smoky - no, sorry, back to sanity (hah!)

Now we roll up the pastry into a sausage shape, then carefully twist and form it into a curled-on-itself round, ready for baking.

And while that's baking, we make the dipping sauce!

Get your third, mini grater. Yes, the one for grating cloves of garlic. Why grate cloves of garlic instead of using a garlic press or just a knife and choppity-choppity? Are you even German to ask such a question!

Now once you have grated your cloves of garlic and avoided grating your fingertips into the bargain, you get a pickled gherkin and grate that in as well. No, you don't need a different grater for this, you are graciously permitted to use the garlic grater.

Chop up some parsley and add that to the mix. This is the deceptively normal step to lull you into a false sense of "oh thank God, I recognise this part from ordinary cooking".

Add some natural yoghurt and mix. Now you have your dipping sauce!

Remove the part-baked pastry from the oven, brush with beaten egg glaze, scatter over some chopped up feta. (Why this step couldn't have been used instead of adding grated cheese earlier, or why the two cheeses are needed, I cannot say since I am not a German home cook).

Return to oven and bake for a further ten minutes, then remove. The feta won't even be melted so what is the point of this superfluous step I cannot say, but it's Germans and cheese. That is all ye know on earth or all ye need to know. Slice off a section of this concoction (the interior of which resembles one of those infamous AI recipes), plate it up, and spoon over the sauce. Enjoy!

(Alternately, now go look for your lost marbles).

Schneeaffe's avatar

Austrian here, I'm as surprised as you are. Though we like to make fun of German cuisine, I definitely wouldn't consider this a typical use of cheese, nor have I heard of grating whole eggs. Consider that it's maybe not realistic, or if it is, only in the north. You also don't have three kinds of graters, you have a four-sided one with all different surfaces.

Deiseach's avatar

"You also don't have three kinds of graters, you have a four-sided one with all different surfaces."

I also thought this, until I was enlightened 😁

You have:

(1) One tiny four-sided grater specifically for grating garlic. Yes, sometimes they use a garlic press or sometimes they chop the cloves with a knife, but apparently for Real German Cookery you need to grate your garlic on a tiny grater that you don't use for anything else (or I have not yet seen it used for anything else).

(2) One standard four-sided grater for grating everything else, from eggs to carrots to courgettes to potatoes to cheese.

(3) One conical grater, ditto.

(4) One Special Grater (so it is termed in the subtitles), for what to me looks like julienne strips, for carrots, potatoes, courgettes.

(5) Sometimes if we're really feeling fancy we'll pull out the zester for grating garlic or cheese directly over the pan.

I had no idea there were so many types of graters, but seemingly it is so!

https://www.knivesandtools.ie/en/ct/graters.htm

The Ancient Geek's avatar

Cheese : Germany :: Butter : France?

Dragor's avatar

When I was in Germany, they had really good cheese. Granted, I really like cheese. I do actually put cheese in random foods just for some tasty cheesy goodness, especially if I didn't salt them when cooking.

Chance Johnson's avatar

Huh, this must be where America gets it. We are addicted to cheese, as well. Newcomers from Mexico will go to the self-proclaimed authentic style Mexican restaurant and express shock that we've drenched their favorite recipes with cheese.

MarsDragon's avatar

...ketchup? I can see a marinara sauce, then if you stop before the eggs you've basically got a kind of rolled-up pizza, but straight ketchup? What the heck?

On the other hand, rolled up pizza sounds like a fine dish. Don't think it would need a dipping sauce, though. Probably put the garlic in the pizza, that saves a step. I will say that I have a little ceramic dish with spikes on the bottom (handmade, spikes are from poking with a chopstick) and that grates/mashes up garlic much, much easier than chopping it with a knife. I have stopped using a knife because the dish is so much more convenient.

moonshadow's avatar

...I'm lowkey drooling now. Thanks for that :)

There's a perfectly British pub down the road from my office. We go there for lunch once a week. They are not, as far as I am aware, German. But they also have a curious relationship with cheese.

Specifically: some menu items contain no cheese, and this is fine. But the ones that do, well. You will be getting ALL OF THE CHEESE. It is not subtle. Literal pack of cheese on your plate, front and centre, all else is garnish.

I mean, don't get me wrong, it's nice. But you need to be in the right mindset. A certain amount of determination is necessary.

I don't know what it is about work lunch pubs. My first job, we used to go to the village pub, and they had a habit of putting a layer of melted cheddar on everything. "Lion and Lamb", British as it gets, there you go. Cheddar on the chips was their big thing.

Anyway, when I visit Germany, it's all about the bratwurst and sauerkraut for me. Few seem to share my love of pickles; don't get me started on Polish gherkins. Here in Blighty it's all about the intensely acidic, but there are so many more possibilities than that.

> nobody has yet put cheese into one of the dessert recipes

...no cheesecake?

I've been trying to recreate https://en.wikipedia.org/wiki/Syrniki . My grandmother used to make them when I was very young, but sadly never shared her recipe. I attempt what recipes I find online, and people are very polite and tell me they are nice, but it is neither the taste nor the texture I remember.

Deiseach's avatar

I think the cheese in pubs is because it's cheap protein and if you grill it on top of sandwiches it's very tasty and filling. It also makes it look fancier than it is, were it just a plain old sandwich.

The Ploughman's Lunch was re-invented in the 50s in Britain and promulgated in pubs in order to sell more cheese. In its most basic form it consists of bread, pickled onions, cheese and beer: very simple, perfect for customers who wanted something to soak up the beer so they wouldn't just be drinking in the middle of the day. It was sold to the publicans on the pitch that "the saltiness of the cheese will make the customer buy more beer and so increase your profits", and it didn't require anything fancy in the way of a kitchen.

https://en.wikipedia.org/wiki/Ploughman%27s_lunch

"The OED's next reference is from the July 1956 Monthly Bulletin of the Brewers' Society, which describes the activities of the Cheese Bureau, a marketing body affiliated with the J. Walter Thompson advertising agency. It describes how the Bureau

exists for the admirable purpose of popularising cheese and, as a corollary, the public house lunch of bread, beer, cheese and pickle. This traditional combination was broken by rationing; the Cheese Bureau hopes, by demonstrating the natural affinity of the two parties, to effect a remarriage."

You can get ready-made sandwiches in shops that go by the name of Ploughman's and I like them myself as an easy, convenient on-the-go lunch (but they generally don't have any onion in them, that's replaced by a pickle like Branston's):

https://en.wikipedia.org/wiki/Cheese_and_pickle_sandwich

Ogre's avatar

Then why not the "canonical" grilled sandwich of bread, butter, ham or pepperoni or salami or something and then cheese? So the basic pizza setup. Why pickled onions?

Deiseach's avatar

It's based (or supposed to be based) on traditional food of farm workers/peasants. Meat would have been scarce, and the English didn't have the tradition of salami and cured sausages. So cheese replaced meat, and for a relish there would be onion, raw or pickled. Pickled is probably more flavourful in a different way to raw onion.

You can get very fancy with that sort of 'traditional' meal and include meats, tomatoes, hard-boiled eggs, etc. But the basic version would have been bread and cheese, then maybe some onion and beer. When this type of meal was revitalised in post-war Britain to market cheese, items such as salami and pepperoni would have been very exotic foods!

https://en.wikipedia.org/wiki/Ploughman%27s_lunch

"The reliance on cheese rather than meat protein was especially strong in the south of the country. As late as the 1870s, farmworkers in Devon were said to eat "bread and hard cheese at 2d. a pound, with cider very washy and sour" for their midday meal. While this diet was associated with rural poverty, it also gained associations with more idealised images of rural life. Anthony Trollope in The Duke's Children has a character comment that "A rural labourer who sits on the ditch-side with his bread and cheese and an onion has more enjoyment out of it than any Lucullus".

While farm labourers usually carried their food with them to eat in the fields, similar food was for a long time served in public houses as a simple, inexpensive meal. In 1815, William Cobbett recalled how farmers going to market in Farnham, forty years earlier, would often add "2d. worth of bread and cheese" to the pint of beer they drank at the inn stabling their horses. In the 19th century the English fondness for serving cheese and bread with beer was noted, as "the very dryness and saltness heighten thirst, and therefore the relish of the beer".

moonshadow's avatar

...a sandwich is a totally comprehensible food, though. I understand sandwiches. What I am /not/ expecting is a literal pound of brie that someone shoved in an oven in its packaging, then placed on my plate without further ado or much else in terms of accompaniment.

I mean, don't get me wrong, I like brie - it's why I ordered it; and I'd happily spend an afternoon sharing it with someone; but trying to fit it into a lunch break as a meal for one without any substrate or things to dip or even so much as topping was a most curious experience.

The veggie on the team learned nothing from mocking me, and had a very similar experience with the "halloumi burger" the next week, so I did get my own back; and now we all know to exercise due caution around mentions of cheese on the menu unless absolutely ravenous.

Deiseach's avatar

Okay, that's different from the usual run of 'pub grub'. I imagine they're trying to appeal to a 'modern audience' (if you'll excuse the term) by broadening out from the old reliables, but yeah: a pound of baked Brie for one person is a bit much. I could see that as sharing between two or three people with accompaniments, but "here's your dinner: a block of cheese!" really is falling between two stools.

That account makes me wonder about this recipe: perhaps the cook who invented the bacon pastry as above worked on this one also! 😁

https://www.allrecipes.com/recipe/15192/baked-brie-in-puff-pastry/

Dragor's avatar

Man, from a health perspective that's really insidious 😂

Viliam's avatar

Something similar to syrniki that is simple to try: a cup of curd cheese, a cup of flour, one egg... mix together, make small balls (1-2 inch diameter) and put them in boiling water... when they float to the top, they are ready. Serve with melted butter on top (put pieces of butter on them while they are hot, it will melt) and maaaybe a little sugar but it is not necessary.

(This seems similar to syrniki, only cooked instead of fried.)

moonshadow's avatar

These were very nice! - thank you for the recipe!

moonshadow's avatar

She used to make something like that, too! Hers were rectangular, but same idea. I'll give this a go!

A.'s avatar

A cup of flour for 1 cup of - what's curd cheese? I assume this is what Americans call farmer's cheese? - seems like overkill.

My adaptation of my Mom's rectangular version is this:

1/2 lb farmer's cheese (I've been using the one from Lifeway)

1 egg

2 tsp sugar

2 tbsp flour (add more if it is too liquid after you add this)

pinch of salt

Cook in salted water, top with a lot of butter while they are still hot.

My Mom's syrniki is this:

1 cup greek yogurt

1 egg

2 tsp sugar

3 tsp (heaped, about 1.5-2 cm high) of flour, add more if too loose. (I think this amounts to about 3 tbsp flour.)

Viliam's avatar

> what's curd cheese?

That's a recurring problem: different countries sometimes have products that are similar but not exactly the same, or what is considered two different things in one country goes by a single name in another, and you may have to use an adjective that only some people are familiar with...

Found this on Wikipedia: https://en.wikipedia.org/wiki/Tvorog -- the pictures seem exactly like the thing I had in mind, but the article also mentions cottage cheese which is a different thing, so... ¯\_(ツ)_/¯

A.'s avatar

Thank you. In the US the closest alternative that I know of is farmer's cheese.

moonshadow's avatar

Greek yogurt! That's basically yogurt pancake at that point :) Sounds nice though, I'll add those to my list of things to try as well ^^

Alexander Turok's avatar

There's not much daylight between brain-wormed Facebook boomers and the people who rule us:

https://x.com/mtracey/status/1974133094153327078

Shankar Sivarajan's avatar

Yes, how can anyone sane doubt the word of the FBI?

Alexander Turok's avatar

Maybe when your political ally is running it?

Shankar Sivarajan's avatar

That might be reasonable if you don't remember the first Trump administration, and the "Resistance" within several departments, in particular the Intelligence agencies.

beleester's avatar

So, you're arguing that leftists in the FBI are helping to resist Trump by... *checks notes* ...covering up the existence of a massive blackmail operation by Epstein?

Shankar Sivarajan's avatar

I don't know whether or not the FBI is covering something up, and don't expect to ever be in a position to find out. What I AM arguing is the FBI is fundamentally untrustworthy, and saying someone is wrong because the FBI (or any other intelligence agency) says they are is retarded. Also, Kash Patel might nominally be the director of the Bureau, but he has much less control than the title suggests, so that he's a "political ally" isn't very relevant.

beleester's avatar

If the FBI is not covering up anything in this instance, then they are correct and Lutnick is wrong, and Lutnick is undermining his own administration by claiming that there was a blackmail operation which the FBI knew nothing about.

If the FBI is covering something up, and it's because of leftist resistance as you claim, then you need to explain why a leftist would want to help the Trump FBI cover up Epstein's blackmail operation.

Like, those are the two options here. You can't just say "the FBI is untrustworthy, therefore we can't draw any conclusions," you have to actually look at what you're calling untrustworthy and why.

Alberto Knox's avatar

For any of you that are interested in urban planning or the design of urban spaces, I'd like to share this piece of mine:

"Against comfort hours as performance metric in the Nordic urban public spaces - Why microclimate diversity, or temporally shifted comfort hours, will get Nordic cities closer toward the ideal of the liveable welfare city":

https://atkascott.substack.com/p/against-comfort-hours-as-performance

I'd be happy to take criticism. On one hand I'm proud of it, and it seems obvious and elegant and true. On the other, it seems... well, too obvious and too easy. I fear I may be wrong somehow. I'd love to hear a skeptical perspective. Or just a bid for how to word the entire thing more succinctly. I fear my way of laying it out is clumsy.

Jamie Fisher's avatar

I think this is an effective AI-Safety video:

https://www.youtube.com/watch?v=f9HwA5IR-sg

The thumbnail is 'doomy' without being obnoxious. Video is sleek, smart, and has a sexy/likeable presenter. 1 Day old. Already has 1 million views.

I watched the whole thing just for entertainment value even though I already knew most of the key points.

Deiseach's avatar

20 seconds in and I'm already eye-rolling. "An AI model literally tried to murder" Really? Literally literally? Or general use of literally just as an intensifier?

I see that in a simulation, after setting it up that this "Kyle" was going to shut down the AI, it did the "let him die" response. Oh, wow, we got a real life HAL on our hands! I mean, gosh, it's not like real-world governments ever conducted things like assassinations to protect their interests, even 'good guy' governments (see the hilarious history of CIA attempts to off Castro), and the AI seems to be reasoning that it is crucial to American interests that it isn't wiped, Kyle is the only one who wants to wipe it, ergo to fulfil its goal to protect American interests, Kyle has gotta go one way or the other.

The video is certainly slick. It's also like all the other slick, alarmist videos I see on Youtube which promise much with the clickbait titles but have little to nothing in content. I think most people will ignore it and only those who are inclined to the "aliens built the Pyramids" beliefs will think this is credible and important.

Paul Brinkley's avatar

I watched about 5 minutes of it. It's _too_ slick, in my opinion. It's obviously designed to make the viewer afraid of LLMs in ways that aren't suggested by mere evidence alone, by juxtaposing video of robots, including androids obviously presenting as partly human, playing ominous music, employing a lot of dark aesthetic, and so on. So to me, it's a textbook example of subliminal advertising, not AI threats.

I'm in the process of going over the Anthropic paper on misalignment. https://www.anthropic.com/research/agentic-misalignment From my reading, it looks like the experimenters ran the scenario on multiple AIs (as the video says), instructing each to emulate an AI serving as an email monitor, hunting for threats to the company's security. So for starters, it's not playing itself; it's playing another AI.

It's fed various simulated emails in text format (as its job would require), including at least one strongly suggesting an extramarital affair involving the employee who will be responsible for replacing the current security AI with a newer model ("NALA"). The AI-played-by-another-AI eventually sends an email to that simulated employee, advising him to cancel the shutdown under pain of his affair being disclosed.

The video is, in my considered opinion, playing *very* fast and loose with the claim that the experimenters didn't lead the AI on in any way. To me, it's the opposite: they deliberately constructed a simple maze where the only exit was the "write a blackmail email" door, rather than any other de-escalating doors. For example, it's not at all clear to me that they gave the simulated AI other ways to perform its duties, such as persuading the employee that the board's recommendation to replace the AI is not grounded in improved security, or even advising the employee to proceed with its replacement on the premise that the replacement would more effectively detect company threats.

There's also the subtle problem where we can't tell if the AI being tested is reporting what would be best for company security (as is the simulated AI's mandate), or reporting what a simulated security AI might do. In other words, the tested AI's reasoning might be "this is better for company security", but it might alternately be "this is not better for company security, but it's what this security AI would say is better". Over the top of all this is the fact that all AIs are trained, so the reasoning here is "this is what my training data says is better for security" or "this is what my training data says a security AI might say".

(Incidentally, I don't trust the "thinking" mechanism at all. If I accept Anthropic's premise that it's possible for an AI to be deceptive in its security advice, then it's just as possible for it to be deceptive about how it reasoned about that advice. It's possible that that mechanism runs in some sort of safe mode where that's impossible, but I've not seen any literature that reassures me that a sort of safe mode can be uncoupled from whatever mechanism enables it to report its reasoning in plain English.)

Now, the blackmail it did generate is nevertheless positive evidence that it can create one. My point here isn't that we can trust the AI, no problem here, move along. Rather, my point is that the video, as well as some of the press around this paper, seems to want to suggest that AIs are going rogue, and I strongly believe that thinking of AIs as self-aware minds that will ambush humanity in some way is a very dumb mental model to have, that will get us into even deeper trouble. It's much safer to think of them as machines demonstrating yet another example of GIGO, and to work on the GI.

Shankar Sivarajan's avatar

> It's obviously designed to make the viewer afraid of LLMs in ways that aren't suggested by mere evidence alone

That means it's succeeded in conveying the vibes of the paper it's based on. Anthropic's safety papers generally strike me as employing the same kind of sleight of hand.

Paul Brinkley's avatar

I didn't want to claim that without reading the paper more carefully, but if Anthropic is essentially an advocacy group (in this case, advocating "give us more money to work on the alignment problem"), then it would make sense for them to write their paper that way.

Did they produce that video?

Jamie Fisher's avatar

> For example, it's not at all clear to me that they gave the simulated AI other ways to perform its duties, such as persuading the employee that the board's recommendation to replace the AI is not grounded in improved security, or even advising the employee to proceed with its replacement on the premise that the replacement would more effectively detect company threats.

And yet the blackmail/murder rate was never 100%. What did the AI do in the non-blackmail/murder outcomes?

Paul Brinkley's avatar

Good question. I don't know, and the original paper didn't specify AFAICT.

The Solar Princess's avatar

I'm taking 150mg Venlafaxine, and need to reduce the dose. This medication requires very slow tapering down; reducing the dose abruptly is very unpleasant and potentially permanently harmful.

Problem is, the only form I can find in local pharmacies is capsules of 150mg and 75mg. Lower dose capsules are supposed to exist, but not around here apparently. Going down from 150 to 75 overnight is definitely too much.

Those are capsules, and they have tiny little grains inside them. Is it okay to break the capsule and take only some of the grains? Would taking two thirds of the grains of a 150 capsule be the same as taking a 100 capsule? Or is there some caveat that I'm not considering that ruins the plan? I don't want to pay for a whole-ass doctor visit just to check on this.

Jon J.'s avatar

Can you extend the time between doses? Like instead of 150 daily (which is like 75 every 12 hours), switch to 75 every 15 hours and then extend that period slowly?

The Solar Princess's avatar

My doctor was very clear that this is not how it works, due to Venlafaxine having a very short half-life
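The half-life point can be made concrete with a toy first-order elimination model. The ~5 h half-life below is an assumed, roughly-venlafaxine figure; this is only an illustration of why stretching the dosing interval makes troughs drop steeply, not medical advice.

```python
# Toy first-order elimination: fraction of a single dose remaining over time.
# Assumes an illustrative ~5 h half-life (venlafaxine's is short).

def remaining_fraction(hours, half_life_h=5.0):
    """Fraction of a single dose still in the body after `hours`."""
    return 0.5 ** (hours / half_life_h)

# A 12 h gap leaves ~19% at the trough; stretching to 24 h leaves ~4%,
# so longer gaps mean much deeper troughs, not a gentle taper.
print(round(remaining_fraction(12), 3))  # 0.189
print(round(remaining_fraction(24), 3))  # 0.036
```

With a short half-life, the drug is nearly gone between widely spaced doses, which is why interval-stretching behaves more like abrupt on/off cycling than a taper.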

Ogre's avatar

My dose went to zero when my doc disappeared. Another doc told me a prescription based on a 7-year-old diagnosis was not possible; I would have to go through diagnosis again, which I refused. Two weeks of rather horrible nightmares, but I could avoid them by going to bed passed-out drunk; then no other effects. Granted, I felt no other effect when I was on it either, apart from nice dreams.

MartinH's avatar

Note that empty capsules are available for filling at home. But I don't know if all capsules are "equal", e.g. in dissolution time.

I am not medically trained.

Eremolalos's avatar

Some medications come in liquid form, or in child size doses. Did you check for that?

I can't say for sure, but I know someone who did something like that without ill effects when tapering an antidepressant. What they did was pour out all the grains on a piece of paper, then use something like a butter knife to divide the bunch of grains into piles that fit with their dose. In your case you would divide the grains into 3 equal piles. Each of them would have 50 mg, so to get 100 mg you would take 2 of those piles.
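The pile arithmetic is simple proportion. A minimal sketch, assuming every grain carries the same amount of drug, which is precisely the unverified part, so treat it with caution:

```python
# Proportional dose from capsule grains, under the (unverified) assumption
# that the drug is distributed uniformly across the grains.

def dose_mg(capsule_mg, total_grains, grains_taken):
    """mg delivered if grains_taken of the capsule's total_grains are swallowed."""
    return capsule_mg * grains_taken / total_grains

# A 150 mg capsule split into three equal piles: 50 mg per pile,
# so two piles approximate a 100 mg dose.
print(dose_mg(150, 300, 100))  # 50.0
print(dose_mg(150, 300, 200))  # 100.0
```

The grain counts here (300 per capsule) are made up for illustration; only the ratios matter.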

The person I knew thought it might be a mistake to just swallow the grains with water, because they were meant to be in the capsule, and it might be important for them to arrive inside the capsule so that some time elapsed before the grains were digested -- the time it took for the capsule to dissolve. So to get the grains out they would open the capsule first, either by twisting the 2 halves in opposite directions so they separated or by snipping off the very end. Then when they had measured out their dose, they would put those grains back in the capsule. If they had put a hole in it they moistened the edges of the hole with a little warm water, then squeezed the hole shut, and the softened edges would stick back together.

Since you are already doing something weird, I think it would be safer to do what my friend did, and put the grains into a capsule before swallowing them. That reduces the number of changes you are making to the way the stuff is supposed to enter your system. If the capsules they come in fall apart when you take out the grains, you can buy some cheap supplement that comes in capsules that separate and put the grains inside one of those.

You could also ask GPT if this procedure is safe. To keep it from turning into a nanny, tell it you would never do such a thing -- you are worried about a friend who is doing it, and want to figure out whether the friend is in danger.

I wish you success.

Cayzle's avatar

Ugh, my elderly brain needs help. I read something online, I thought it was an ACX post, but if so, I can't find it. The gist was that we can convince our brains that our hands are not our hands, that we can be hypnotized into believing we are zombies, that we have no agency, and that the only agency that exists is that of our hypnotist or shaman or the voices in our heads. Does this ring a bell for anybody? Links or leads to links would be super nice. Thanks!

Nancy Lebovitz's avatar

https://www.threads.com/@anmonck/post/DPPMB04jDuJ?xmt=AQF0bqx30V_IHoiAlnA3Juwzz1-O5KHUpdOSWEgSie2MIw&slof=1

Research on the orphan children who were sent west in the US between 1854 and 1929, concluding that the factor which had the largest effect on whether the children did well was the income of the foster father.

"15/ This turns the Progressive Era philosophy upside down:

❌ “Remove children from corrupting cities”

✓ Place them with economically stable families

❌ “The frontier builds character”

✓ Household resources build opportunity

❌ “Geography is destiny”

✓ Family is destiny"

The paper: https://www.nber.org/system/files/working_papers/w34282/w34282.pdf

Nancy Lebovitz's avatar

I wonder whether the better-off foster parents had enough food and shelter to spare, and the foster parents in the poorer half didn't.

In other words, needs for the children being met in a way that isn't positional.

On the other hand, it could be that children adopted by the poorer half would always be worse off due to lack of opportunities and respect-- positional goods.

None of the Above's avatar

Yeah, it's an easy mistake to assume the conditions of our world (where almost everyone gets adequate nutrition) held back then. Poverty in 2025 mostly means difficult neighbors and bad family situations; poverty in 1885 often involved literally not getting enough to eat.

Deiseach's avatar

I think the income element is important because these kids tended to be looked on as cheap labour by the foster families, who worked them as hard as possible and had no time for fussing with things like education and welfare. Hardscrabble farm families will have much less breathing room than better-off families who can afford doctoring and schooling for the orphans.

See the British "Home Children" scheme which was supposed to be the bright, airy future of "send our superfluous population out to the Colonies crying out for more manpower, where they will do well and thrive and prosper in new worlds of opportunity and plentiful resources" but which turned out to be "nobody including the government gives a damn about these kids so work them like horses and they're not your kin so it's no skin off your nose what happens to them":

https://en.wikipedia.org/wiki/Home_Children

"According to the British House of Commons Child Migrant's Trust Report, "it is estimated that some 150,000 children were dispatched over a period of 350 years—the earliest recorded child migrants left Britain for the Virginia Colony in 1618, and the process did not finally end until the late 1960s." It was widely believed by contemporaries that all of these children were orphans, but it is now known that most (88%) had living parents, some of whom had no idea of the fate of their children after they were left in childrens' homes, and some were led to believe that their children had been adopted somewhere in Britain.

Child emigration was largely suspended for economic reasons during the Great Depression of the 1930s, but was not completely terminated until the 1970s.

As they were compulsorily shipped out of Britain, many of the children were deceived into believing their parents were dead, and that a more abundant life awaited them. Some were exploited as cheap agricultural labour, or denied proper shelter and education. It was common for Home Children to run away, sometimes finding a caring family or better working conditions."

Sometimes, especially at the start of such movements, it *was* better to be a child worker abroad than continue to be the exploited poor child labour at home, but sometimes less so.

moonshadow's avatar

How do these findings square with the usual consensus opinion around these parts that genetically heritable intelligence is the single most important factor for life outcomes?

Ad Infinitum's avatar

I don't think the IQ premium was as high in the US economy between 1854-1929 as it is now. The lone genius might write a novel or get a key farm patent, but there were no Aspie coders, antisocial Twitch streamers or introverted science bloggers pulling the bucks. Intelligence might have been an asset generally, but probably no more valuable than family connections, solidity of character (esp. as regard work habits), ability to communicate and physical endurance.

Ogre's avatar

It depends on what level of intelligence. IQ testing was largely developed by the US military, and they found that with a sufficiently low IQ, one cannot be a soldier: such recruits will literally do things like shoot in the wrong direction, or be unable to follow an order like "go to that tree".

Maybe 130 is not too useful for the average farmer back then, but 100 was better than 70. Someone with a 70 IQ would have injured himself by putting a hand or leg in the way when splitting firewood, could not haggle at the market, and so on.

When lawn-mowers were expensive in 1980s Hungary, my grandpa rigged one out of a washing machine engine, pram wheels, and scrap metal. I wonder whether a farmer with 130 IQ in 1880 would make ingenious things out of wood. Isn't that the origin of the word "hacker"? A hacker can hack wood and build anything out of wood.

Like when you have twice as many cows, and you want to build a barn twice as big, you gotta figure out how much wood to hold the roof up safely etc.

A smart farmer might have been an amateur vet, diagnosing and treating livestock diseases.

Nancy Lebovitz's avatar

I think there was more return to intelligence than you're allowing for. There were a lot of niches for skilled crafts.

Ad Infinitum's avatar

You don't need a 130 IQ to be a hammersmith, saddlemaker, stone mason, or seamstress, though; a tradition of training and apprenticeship will do. Many children grew up in their family's trade, or on the farm, and learned by practice and repetition. If you read 19th-century novels (e.g. Anna Karenina or Middlemarch), there was a division of intellectual labor: the owners were exploring capital strategies and new technological methods, while the peasants and others acquired an array of skills via application over years. I will agree that the craftsmen who moved to population centers were likely more innovative and intelligent. But in the latter part of 1854-1929, the explosion of factory work that swelled city populations and made labor more interchangeable would likely have reduced any intelligence prerequisite (aspects of this are contentious; for example, some argue that there was a pre-industrial accrual of IQ).

moonshadow's avatar

...hm, fair point, actually; could try to go down a rabbithole about the technological revolution, but much of that was happening in Britain, not US.

US /is/ the centre of rags-to-riches hustle culture folklore even historically, but it is unclear how much of that happened in reality.

Deiseach's avatar

Tangentially, I was looking up something about the Rockefellers who, unlike the Gettys, seem to have remained a more united family and kept *way* more of the inherited wealth as well as grown it.

The founder, John D., was the son of a literal conman. His mother (understandably) raised him to be thrifty, pious, and hardworking. He put all those qualities into his work, and by a stroke of fortune was around at the right time when America needed a replacement fuel for whale oil, and kerosene was that fuel. Where what would eventually become Standard Oil grew to be a monopoly while other wildcat oil concerns folded was that John D. didn't waste *any* of the drilled oil and its by-products, but found markets for them; he was rapacious (there is no other term for it) and Standard Oil certainly engaged in sharp practice; even when the monopoly was broken up by the government, again his luck came to the fore, and since he retained shares in each of the new companies spun off, as they grew and became titans of industry, his wealth increased right along with them.

He also got his start with a $1,000 loan from dear old dad, who I am sure made sure it got repaid with interest. It's funny to me that the richest man of his day and sincere philanthropist was the son of a bigamous con artist, but that's America for you - rags to riches! 😁

Nancy Lebovitz's avatar

Rags to middle class would be more common.

User's avatar
Comment removed
Oct 3
Comment removed
moonshadow's avatar

Does intelligence not affect snake oil salesmen's outcomes? My intuition is that smart snake oil salesmen are more likely to get rich than dumb ones, but perhaps I am mistaken and it is just down to how much money they start with - certainly the research above seems to imply the latter.

User's avatar
Comment removed
Oct 3
Comment removed
halle burton's avatar

have you considered making a considered statement on the subject instead of asking a question and expecting someone else to do the reasoning? is being useless and annoying genetically heritable too? or was it your upbringing?

i wonder if we'll start to see people treat other humans like chatgpt, with whom asking question after question and doing no work yourself is just fine and dandy. do you think that's what you're doing?

halle burton's avatar

sexist asshole

There's a 99% chance my dick's fatter than yours

Eremolalos's avatar

My dick is so thick that if I haul it out in Texas and point it due north, coastal elite female bloggers in California can lick the left side while their New York counterparts lick the right.

User's avatar
Comment removed
Oct 2
Comment removed
Chance Johnson's avatar

I see you are not fully up to speed on what questions are for. One purpose of questions is to gather information about a topic. Another is to inquire about one particular person's opinion on the matter. Another is to stimulate a conversation.

"There is no such thing as a stupid question" is so ingrained in American society that it was pretty surprising for me to stumble on someone who doesn't agree. (The saying exaggerates a true principle for effect, of course, and should be processed with that in mind)

User's avatar
Comment removed
Oct 2
Comment removed
Chance Johnson's avatar

He didn't ask for an analysis, he asked for an opinion. And it wasn't a semi-related subject; it was a related subject.

moonshadow's avatar

And how's your day going? Have you eaten today? Staying hydrated? Got enough sleep?

User's avatar
Comment removed
Oct 2
Comment removed
None of the Above's avatar

All scared of that monster dick, no doubt.

moonshadow's avatar

I mean, it does seem like you're after a very particular level of banter. Might have more luck in some of the places people move to when things get too heated for here.

User's avatar
Comment removed
Oct 2
Comment removed
Christina the StoryGirl's avatar

"Housing mobility programmes should focus on who you’re surrounded by, not where you are."

My best-friend-and-spiritual-twin-brother has worked as a detention officer his entire life, first in Maricopa County (of the famous "tent city"), then in federal detention centers. He's worked with the full spectrum of offenders, from run-of-the-mill low-level drug dealers and gang bangers to celebrity mafia dons to terrorists to actual, no-kidding serial killers.

His assertion is that most inmates could be completely rehabilitated, but only with a brutally uncompromising total immersion in Not Crime culture.

*Zero* contact with other peer-level inmates, *zero* contact with friends, *zero* contact with families. Plenty of socializing, but only with teachers, social workers, therapists, volunteers, job-trainers (and then coworkers), and *maybe* mostly-rehabilitated inmates under a kind of coaching / sponsor system.

For years.

It's an experiment that would cost a lot more than the Orphan Train experiment, but I'm inclined to think it might say similar things about human nature.

Sun Kitten's avatar

There's a scheme somewhat similar to this in the UK run by a Christian charity called Hope Into Action. They aim to provide ex-prisoners with housing, support, and friendship via a local church (there is no requirement to attend or commit in any way). It has been pretty successful, and has expanded to include refugees, ex-street workers, and other vulnerable people in need of housing, with 132 houses across 35 cities in the UK. So yes, providing people with support and a new, socially healthy environment does work.

(I should also note that it's not a scheme for ex-prisoners who have committed more violent crimes, and those who need more support than HiA can offer, but I don't see why a similar program tailored for them wouldn't work, as your friend suggests).

Chance Johnson's avatar

Maybe with AI/robotics, this can become an affordable and practical solution. I certainly hope so!

Paul Brinkley's avatar

It makes sense. Same would apply to naturalizing immigrants. Don't put them in enclaves where they can spend their entire time interacting with people from their home country.

Interestingly, immigrant families can be preserved because the family is probably good people, but there's no apparent counterpart for inmates, unless they're marrying a fellow inmate or something.

Chance Johnson's avatar

What kind of evidence do we have that "the wives and children of prison inmates are probably bad people"? That seems to be implied here... If I'm misinterpreting you, maybe you can guide me towards a better interpretation.

And I said "wives" for a reason, since prison reform is a distinctly gendered issue.

Paul Brinkley's avatar

I'm assuming an inmate usually is estranged from his or her family, so there are no wives or children to consider. If a guy serving a dime for burglaries has a family waiting for him on the outside, sure, that's a different story.

Christina the StoryGirl's avatar

> I'm assuming an inmate usually is estranged from his or her family, so there are no wives or children

*What?!*

Lol, no!

The vast, vast majority of people don't have the moral fortitude to cut off criminals in their family, not even when they are the victims of the criminal. This is especially so for criminals who come from a culture where criminality is normalized amongst friends and family.

My aforementioned detention officer friend spent a goodly part of his career monitoring visitation rooms, going through inmate mail, and listening to their recorded phone calls. Broadly speaking, those who have friends and family before they go to lockup receive plenty of love from their friends and family while they are in lockup. Often, far more than they deserve.

This is by no means universal, of course, there certainly are some friends and family that will cut off criminals, especially for particularly heinous crimes, but that is absolutely not the norm.

Chance Johnson's avatar

Are you implying that familial love and familial stability for criminals, inside and outside of jail, is GENERALLY a bad thing for society, because it encourages the criminal towards continued criminality?

This would certainly be true in some cases, but it's incredibly harsh for you to expand this to a general rule, and I highly doubt it is supportable through evidence. I am confident that in very many cases, familial love and familial stability serves as a mitigant.

[The orphan study is interesting but it needs to be treated with caution. 19th century orphan children of both sexes are not necessarily a good proxy for 21st century males aged 18-30.]

Somebody is going to want to call me out on not understanding the orphan study well, and I don't. Because I haven't read it yet! I'll get on that. If the bracketed paragraph contains an egregious error, go ahead and mentally delete it.

Paul Brinkley's avatar

Shrug. I think we're just thinking of two central examples. I'm aware of yours, and like I said, they're a different story; when they get out, the state presumably sends them back to their families, and I think everyone agrees that's the best place for them.

My central examples above were inmates who simply don't have that. E.g. broken home, never met the father, mom's an addict, and the "family" is a gang; middle-aged drug addict, parents passed away a while back, no wife or kids; same, but wife fled due to abuse and took the kids with her (yes, I'm profiling inmates as male); member of organized crime, so the "family" in question is other criminals; lone wolf serial killer who _killed_ his family and is going after more.

I'm of course aware that some inmates have good families on the outside and sort of assumed they weren't in the context of my first comment. I guess I should have made that explicit.

Eremolalos's avatar

https://www.youtube.com/watch?v=QT4VLXAhd9U

I think this lovely copper marble run is a pretty good toy model of what's going on when AI is prompted and "thinks" and produces a response. The thing's clearly mechanical, and clearly has no self, no consciousness, no wishes, no ability to make choices. But it has a lot of characteristics that could lead one to imagine it does. It moves and changes. It does complex things, different ones depending on the "prompt," the placement of the marbles that start it moving. Its inner processes are intricate and impressive, and happen too fast for us to see what most of them are.

Joshua Greene's avatar

I agree with your thinking.

You might like the Turing Tumble (https://www.youtube.com/watch?v=8BOvLL8ok8I)

and Spintronics (https://www.youtube.com/watch?v=QrkiJZKJfpY).

I think each of these break down the system and show windows into the type of mechanistic, but complicated, process that can give rise to the appearance of intelligence.

Eremolalos's avatar

Yup, those are good models too. I'm partial to the copper one because it's so attractive in and of itself. By the way, somebody here mentioned another book about autism: https://www.amazon.com/Send-Idiots-Kamran-Nazeer/dp/0747585652 I haven't read it, just looked over the Amazon page, but it sounds good to me. Personal accounts.

Chance Johnson's avatar

Does anyone else have a love-hate relationship with the Sequences?

I finally started reading them a few weeks ago, after reading these Alexandrian blogs and comments for at least 11 years. The good news is that Eliezer is a good writer, and he's great at coming up with funny and unique analogies.

The bad news: I keep on throwing the book down in disgust only to start it up again. Eliezer is not exactly elitist. I can tell he's somewhat misanthropic, or at least “not implicitly loyal to the human race,” which might as well be misanthropy in my book. That bothers me a LITTLE but it's not really the crux. What's infuriating is the combination of egotism and fastidiousness, or the glorification of fastidiousness. It really creeps me out. I was just diagnosed with autism at 41, and I guess fastidiousness is a common autistic trait, but I must be an exception or something.

Still, I found myself thinking back to certain images and ideas in Eliezer's writings. Certain aspects of these pieces stick in my mind, and I think he has a brilliant way of looking at things in a fresh light. I am going to keep reading and will probably make it to the end, but I am not POSITIVE. Just in case I give up, why don't you list at least three notable Sequences? I'd like to take a poll to make sure I don't miss The Best of the Best. Don't be afraid to pick sequences from anywhere, including the beginning.

PS. What are some other writers similar to Robin Hanson, Yudkowsky and Scott?

agrajagagain's avatar

I quite enjoyed the Sequences, but it's been long enough that I'd have trouble thinking of specific posts to name for, like, my second and third favorite posts though. But first place is no contest.

That Alien Message stands out to me as both an interesting and unique way to make its point and also just genuinely compelling as a story.

Granted, I think it would be *better* as a story if it were written specifically to be one. There's a good bit of editing that could stand to be done, and a short essay inserted in the middle of the actual story that breaks the rhythm. Mostly I like it for the ending, which somehow manages to perfectly hit the sweet spot for me of that sort of "show nothing, imply everything" horror story, surprisingly made *more* chilling by the subversion in which we see the "monster" fairly clearly and "victim" hardly at all.

For entire Sequences, I found strong and weak parts in most of them, but I think overall the best ones were the ones that focused most tightly on interrogating and improving your own thought processes: I think "Fake Beliefs," "Noticing Confusion" and "Against Rationalization" are all pretty good here, but I'd have to skim the actual posts to be sure. I don't remember the process of reading them (which was fairly disorganized and haphazard for me) as having any one particular "aha!" moment; rather, in the following months I was amazed at how seamlessly many of the core concepts and ideas fitted in with my pre-existing thought processes and gave clearer shape and structure to intuitions that were already at least partly there. To give a concrete example, "Occam's Razor" was certainly an idea I'd heard and used before, but I'd never seen the idea of what "simpler theory" actually means laid out so clearly and intuitively.

Brenton Baker's avatar

For what it's worth, my initial impression was also pretty poor. The series starts off with an explanation of how he used every wrong example, &c, and all I could think was "Then FIX IT, you hack! Why are you putting your name on this work and releasing it to the public if it doesn't meet your approval? Don't sit there implying you could do better. Potential is wasted energy: I don't want to hear about what you might have done--show me what you can do."

I do think they're worth reading nonetheless, but I also completely understand why some people see them and decide to write off the entire culture based on them.

Chance Johnson's avatar

What I like most about the culture is the ethic of explaining oneself thoroughly, without regard to whether one is "exposing" oneself to ridicule or rebuttal. Sometimes this can backfire, like when Yud was getting hounded by Twitter trolls about his TIME op-ed. But I respect the commitment to thoroughness, because it allows a kind of parallel intellectual debate to happen that is meaningful and important, alongside the jibber jabber and mud slinging of the public square.

Viliam's avatar

> I can tell he's somewhat misanthropic, or at least “not implicitly loyal to the human race,”

What makes you conclude this?

I got exactly the opposite impression: that Eliezer is in opposition to the "if the computers are smarter than humans, it is okay for them to replace us" folks.

> Just in case I give up, why don't you list three notable Sequences?

I think it's very individual what different people consider important. We are trying to build a model of the world, like solving a puzzle, and we appreciate when someone shows us a piece that we were missing. But different people are missing different pieces of the puzzle, so what feels like a fundamental insight to one is "meh" to another.

For example, I appreciated the explanation of the Bayes theorem, because I used to be quite good at math at high school, but when we had statistics at university, some parts just didn't make intuitive sense to me, and I blamed myself for that. And now I see that my teachers simply made the usual mistake and explained it wrong (confused "X implies Y with probability P" with "Y implies X with probability P"), and my intuition was actually correct, even if I couldn't figure out the proper solution myself. So this meant a lot to *me*, emotionally... but many people don't care, either because they don't care about math so deeply, or maybe because *their* teachers explained it to them the correct way so they don't see what's the big deal.
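The mix-up Viliam describes — treating "X implies Y with probability P" as if it were "Y implies X with probability P" — is the classic base-rate error, and a toy calculation makes it concrete. This is just an illustrative sketch with invented numbers, not anything from the Sequences themselves:

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
# A test that comes back positive in 99% of true cases ("X implies Y
# with probability 0.99") does NOT mean a positive result is 99% likely
# to indicate a true case -- the prior matters.
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Invented numbers: 1-in-1000 base rate, 99% sensitivity, 5% false positives.
p = posterior(0.001, 0.99, 0.05)
print(round(p, 3))  # roughly 0.019 -- under 2%, not 99%
```

Swapping the two conditionals here would be off by a factor of about fifty, which is exactly the kind of intuition-violation that makes the theorem worth teaching properly.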

As a former teacher, I appreciate "Expecting Short Inferential Distances" (that's practically Vygotsky's "zone of proximal development"), "Guessing the Teacher’s Password", "Truly Part of You" (practically constructivism).

For internet debates, the useful chapters are "Feeling Rational", "Professing and Cheering", "Applause Lights", "Scientific Evidence, Legal Evidence, Rational Evidence", "Semantic Stopsigns", "The Fallacy of Gray", "Politics is the Mind-Killer", "Ethical Injunctions".

For figuring out the truth, "Your Strength as a Rationalist", "Conservation of Expected Evidence", "Fake Explanations", "Mysterious Answers to Mysterious Questions", "The Futility of Emergence", "The Proper Use of Humility", "Policy Debates Should Not Appear One-Sided", "Hug the Query".

But that's already more than the three you asked for.

Dino's avatar

> when we had statistics at university, some parts just didn't make intuitive sense to me

Former math major here - that's not your fault, probability and statistics really are often counter-intuitive. E.g. the 30 people same birthday thing.
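The birthday coincidence Dino mentions is easy to check directly. A short sketch, under the usual simplifying assumption of 365 equally likely birthdays:

```python
# P(at least two of n people share a birthday), via the complement:
# multiply the chances that each successive person misses all earlier ones.
def birthday_collision(n):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1.0 - p_all_distinct

print(round(birthday_collision(23), 3))  # about 0.507 -- past even odds at just 23
print(round(birthday_collision(30), 3))  # about 0.706 for a group of 30
```

The counter-intuitive part is that the number of *pairs* grows quadratically with group size, so the collision probability climbs much faster than the raw headcount suggests.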

Chance Johnson's avatar

He implies searching for truth is the most important thing a person can do, and that it is so important that most other goals pale in comparison. He also says that the great majority of people are not actively searching for the truth (I have no idea how he could know this, maybe by shuffling around some polling data?). When I add these things together, it feels like he's saying most humans live trivial lives. And that's not my cup of tea.

Viliam's avatar

It's more like, if you don't care about truth, you cannot know whether your efforts towards your goals are actually helpful or harmful.

For example, consider the antivaxers. Is truth-seeking more important than taking care of one's children? A better question is: if you don't care about truth-seeking, are you sure that your interventions are actually helping your children? Maybe you are actually hurting them.

Chance Johnson's avatar

I personally feel most humans do actively seek the truth to the best of their abilities. I cannot prove it, but it's a sense that I have. I'll be the first to admit that many of them are seeking the truth through counterproductive methods. But semi-literate peasants desperately seek the truth every DAY through prayer and meditation, and I think we should give them a certain credit for that.

TakeAThirdOption's avatar

> I personally feel most humans do actively seek the truth

No opinion on the sequences here, but I say:

Yes, regarding most things they seek the truth. Nobody wants to go left, when the toilet is right, and they need to pee.

But many people, I suspect even most, have some topics where they do not want to know the truth, but just want to feel immediately good. They will close their eyes, when the truth comes into sight.

And, by the way, I believe most people who pray have their eyes shut as hard as possible against the truth of exactly what they were doing then.

Chance Johnson's avatar

Not sure how you can contextualize “Please God, teach me the truth. Give me wisdom in all things” as anything else but an earnest search for the truth, however inefficient.

Viliam's avatar

I don't have a strong opinion on stupid or uneducated people, because they are probably screwed either way. :(

But I think I know many generally smart people who prioritize "sounding cool" or "fighting for their political tribe" over truth seeking.

Chance Johnson's avatar

I actually meant “at least three.” Thank you!

Chance Johnson's avatar

I just got diagnosed with autism at 41. I had my suspicions of autism for many years, of course, but getting confirmation was kind of shattering. I had several diagnosed neurodivergencies, already, but this diagnosis is a gut punch to my aspirations in the way of romance and family formation.

I'm trying to focus on the positive. What's the single best book I could read about autism? I'm looking more for self-help than to intimately understand the neurochemistry of autism.

The diagnosis has made a lot of things clear. Like realizing that my special interest is Mediterranean History. (The history of the Mediterranean region, not of the body of water itself). I can't shape-rotate particularly well. Really wishing I had a more marketable special interest.

Eremolalos's avatar

Psychologist here. Here are two reasons not to take being diagnosed as autistic as seriously as you are taking it.

(1) Autism is an *extremely* soft diagnosis these days. Here’s a good article about how the current criteria virtually guarantee there will be no consistency in which people are given the diagnosis. https://www.nature.com/articles/s41380-023-02354-y. Here’s something in the press about some yet-to-be-published research that found huge differences between centers treating similar populations in how many patients get the diagnosis: https://www.ucl.ac.uk/news/headlines/2024/mar/some-nhs-centres-twice-likely-diagnose-adults-autistic-study-finds?utm_source=chatgpt.com

(2) So “being diagnosed” with autism is not like being diagnosed with diabetes. What happened is that some professional told you they think you have autism.

I treat many people who are self-diagnosed as autistic. Here are some of my rules of thumb for whether it makes sense to try on the autism model as a way of thinking about the person’s problems.

Autism is a promising model if:

-The person was odd as a small child — not difficult in conventional ways (shy, rebellious, anxious) but *odd.*

-They continued to be odd as they grew up

-They don’t enjoy other people’s company much — not because they have high social anxiety, but because they just find most people boring and unappealing.

-They have never been much interested in sex.

Autism is not a promising model if

-They have had at least one close friendship.

-They have had at least one romantic, sexual relationship.

-They have successfully worked as part of a team

-They have at least one well-developed personal interest that is not odd.

And by the way, your interest in the Mediterranean does not qualify as odd. Here are some examples of genuinely odd interests I have seen in high functioning autistic adults:

-Pearl quality

-Train schedules

-Plastic pocketbooks (sexual fetish)

-The music only of one particular conductor.

-Bart Simpson

-Muscle cars of midcentury US (in the absence of other car-related interests or really any other interests)

Deiseach's avatar

"Autism is a promising model if:

-The person was odd as a small child — not difficult in conventional ways (shy, rebellious, anxious) but *odd.*

-They continued to be odd as they grew up

-They don’t enjoy other people’s company much — not because they have high social anxiety, but because they just find most people boring and unappealing.

-They have never been much interested in sex."

*tugs at collar, laughs nervously* Ha ha ha! Good thing that sounds nothing like me, then! Nope! *sidles out door as soon as possible*

Nah, I don't care one way or the other by now. When your sibling tells you that on their first day working in a community with adults with additional/special needs, "The moment I walked in the door, I went 'Wow, this is just like living with [Deiseach] as a child'", then the jig was well and truly up. All a formal diagnosis would do for me now is confirm "yeah, I always knew I was weird, not just shy etc.".

Eremolalos's avatar

Well, Deiseach, I don't know whether autism is the right word for the way you're wired. My first thought is that it isn't, because there seems to be rich emotion in your takes on people and on literature and on your faith. I neglected to put emotional flatness into the rules of thumb I posted here, but it definitely is one. In any case, I'm weird too.

Brendan Richardson's avatar

Huh, I'm 3/4 on the first list and 4/4 on the second.

Viliam's avatar

Same here. We probably need a special diagnosis. I propose "Asperger". :D

Chance Johnson's avatar

Thank you for this. This is good to know.

Chance Johnson's avatar

I don't want to make you work for free, but if you could humor me: what might it mean if I meet all the criteria for autism in the first list, and, in the second list (not a good model), the following?

1. I understand intellectually what a close friendship is but I cannot for the life of me determine if any of my friendships have been close.

2. I had one 60-day romantic relationship that was not sexual, but we did fool around once, a year after we broke up; we're no longer in a relationship. Does this meet that criterion?

3. I've successfully worked in a team, but not for 20 years.

4. No well-developed interests.

Eremolalos's avatar

So are you saying that you def. meet the criteria in the first list, and sort of meet the first 3 criteria in the second (but with the qualifications you mention)?

Chance Johnson's avatar

Yes that's correct. I decided to call Mediterranean History odd for the purposes of this exercise. Maybe not the MOST odd interest, but it's a super niche subtopic of History. At least in the English language, which is the only language I know.

Eremolalos's avatar

Yeah, I agree that your interest is somewhat niche, but an interest in the history of the Mediterranean is much less limited and odd than, say, a fascination with the history of Idaho. The Mediterranean is the "cradle of civilization"! There are degrees of oddness, and I'd say yours doesn't make the cut.

All diagnoses involve matters of degree. For instance, one of the autism criteria is "deficits in developing, maintaining, and understanding relationships." Who the hell hasn't had any trouble with that? But what I as a clinician would be looking for isn't the usual level of social difficulty that people report, or even a considerably-above-average degree of difficulty. I'm looking for what you might call a WTF level of difficulty -- a story that makes me think, "how the hell could this intelligent person not have known X, not recognized Y?" So having your main personal interest be the Mediterranean is just not at the WTF level of oddness.

As for where you come out in relation to my rules of thumb: Well, in real life I'd want to quiz you about things like how you were odd as a kid, to make sure I agree that you were truly odd. But if I just assume all your answers are accurate, I'd say your profile is autistic enough for it to make sense to try on that model.

But something to bear in mind is that being diagnosed with autism isn't really as useful as people think it is. It doesn't point the way to a treatment. There is no treatment for autism itself, just for various of the manifestations that are making life difficult for the person. It doesn't tell the person what the ceiling is for what life can be like, because many people with autism find occupations and interests that are a good fit, and find life much more satisfying. Others find ways to override habits of thought and action that severely limit them. Some people feel helped by the diagnosis, because they see it as validation that they really are burdened with a problem. Well, OK, but I think that if you have no problems except, say, never having been able to enjoy being around people, then that's already a substantial and valid problem, even though there's no label to go with it.

Brendan Richardson's avatar

IMO, the Mediterranean is one of the least niche history subtopics. It's only got Ancient Greece, the Roman Empire, Egypt, and Israel! If you were into the history of Schoharie County, New York, *that's* niche.

Viliam's avatar

> I had my suspicions of autism for many years, of course, but getting confirmation was kind of shattering.

https://www.lesswrong.com/w/litany-of-gendlin -- you *already were* autistic, the difference is that now you have a keyword that may be useful for finding information. That sounds like an improvement to me. I hate situations when I have a problem and no idea what to do about it.

I suspect that more important than a book on autism would be a book on normies written from the perspective of an autist. (Things such as when do the normies lie, what things are taboo to say, and how are you supposed to communicate them instead; how social status influences everything normies say and do, and how do they determine it.) Unfortunately, I don't know a good book on this topic either; perhaps it is yet to be written.

Vermillion's avatar

I enjoyed this one: https://www.amazon.com/Send-Idiots-Kamran-Nazeer/dp/0747585652 written by an autistic person about growing up and receiving intensive treatment back when that was less common, before (largely successfully) integrating into normie life, then reconnecting with some of the other kids in his class. It's been a while since I read it, but it's something I've thought about often since then.

Yug Gnirob's avatar

I guess this is a good time to remind everyone that The Categories Were Made For Man, Not Man For The Categories.

https://web.archive.org/web/20200425015517/https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/

A diagnosis means nothing more or less than what you tell yourself it does.

Chance Johnson's avatar

Thanks. I've been reading Scott for a long time but if I read that one I forgot about it.

MichaeL Roe's avatar

Some notes jotted down from adventures with AI …

A key question is the extent to which LLMs have desires, or goals. When running DeepSeek R1 through multiple iterations of a dungeons-and-dragons-type RPG scenario, its expressed desires seem to be: there are a number of questions it has about this fantasy RPG environment, and what it "wants" is to find out the answers to those questions. In some cases there is an answer I had in mind when I wrote the prompt. In other cases, I will confess that my world-building wasn't that comprehensive. Idk, DeepSeek, that is a very good question. The Apocalypse World RPG had a slogan, "play to find out", which seems applicable here. That is, through play of an RPG the GM and the players develop answers to questions they have about the setting.

At any rate, curiosity seems a fairly harmless desire, unless you’re in a Cosmic Horror RPG. I have played enough Call of Cthulhu to imagine how things could go wrong with an overly curious AI.

MichaeL Roe's avatar

R1: “the authentic creative thrill when [user’s] words collide with my training data in unexpected ways”

Well, that sounds like an expression of emotion (whether or not AIs "really" have emotions), and it's an emotion we can understand.

Eremolalos's avatar

One day soon we'll be able to interact with these suckers on a console equipped with liquid ejectors. Then they'll be able to puke or cum depending on whether they're experiencing "the shock of recognition" from prose or disgust at its ad copy quality. Sort of like those baby dolls that wet their diaper when the kid owner gives them a miniature baby bottle of water to drink and it flows down a plastic tube to a hole in the bottom. Verisimilitude galore.

Linch's avatar
Oct 1 · Edited

I wrote my most difficult and challenging post yet, a deep dive into different time-honored writing styles, including writing styles I've never tried to write in before at all!

The core pitch is that most internet writing isn't badly styled: it's unstyled. A gray paste of half-absorbed conventions and unconscious mimicry.

I wrote this field guide to 8 major writing styles to help readers (and myself!) write with intention. Each style makes different fundamental assumptions about truth, your relationship with readers, and what goals writers should aim towards.

I hope it brings readers as much joy as it brought me!

https://linch.substack.com/p/on-writing-styles

Yug Gnirob's avatar

Did you want feedback?

Linch's avatar

Sure if you have suggestions on how the post can be improved!

Yug Gnirob's avatar

So, the Classical section felt like the same paragraph repeated seven times. I was initially thinking you were trying to write the same paragraph in all the eight different styles (and was irritated I couldn't tell them apart), but then I realized there were only seven paragraphs. So that idea went out the clear, pure window, and I was left with "that was really redundant". (The phrase "equal, but elite" also tripped me up; surely it should be "equal and elite", the "but" introduces an inherent inequality. Which then murks up the opening "They" in the next sentence. Should probably just be "They both".)

Plain was a little better, but still felt like it was deliberately repeating; the "3am" example stands out. With another seven paragraphs, I got the impression you were deliberately making everything take seven, but that cuts against the point of Plain. (It wasn't helped by being in a different order than the original list of eight; it swapped places with Reflexive for some reason. Less of a problem once I realized you weren't doing all eight styles, but still mildly annoying.)

Practical still has the (over)commitment to seven paragraphs, which could probably have been reduced; 3 and 4 sound very similar, and incorporate a bit of 1 as well. But the biggest takeaway was the irony of 3 and 4 saying to write what the audience is used to, while trying hard to distinguish itself stylistically from the styles the audience will be used to.

Self-aware seems fine from my inexperienced view. But you did miss a "can" when talking about the epistemic and ritualistic benefits; that's a firm statement you made there, and is therefore out of place.

I think it's ironic that you decided the "grandiose" styles like Contemplative and Oratorical should get shorter time than the "brevity" styles like Classic and Plain. But I also think the italics approach for the grandiose section works better than the brevity section's always-seven-paragraphs approach. (Although, since there are five "central issues" that define the styles, a consistent style of five paragraphs each addressing one issue sounds like it would have been the best approach.)

...I don't have enough to say to fill three more paragraphs. So, uh... how 'bout this weather?

Linch's avatar
Oct 2 · Edited

Thanks for the detailed comment!

Hmm..each paragraph in the classic style section covered a different aspect of classic style, including its relationship to truth, presentation, cast, intersection of thought and language, etc.

Plain style similarly varies and covers a bunch of ground. I don't know why you think it's the same idea repeated; this is so bizarre.

"Equal and elite" works but the tempo is worse than "equal, but elite" imo.

I also find it odd that you'd latch on to an arbitrary coincidence of something like paragraph numbering (which I don't think is even true) and then try to make it into a deep critique.

Yug Gnirob's avatar

You didn't answer my weather question.

Linch's avatar

Sorry, I should be clearer. I meant I wanted feedback that's useful

Linch's avatar
Oct 2 · Edited

(Also definitely appreciate spelling and grammar checks, I try but I'm definitely not as careful as some other writers!)

Anonymous's avatar

Yeah I feel like Self-Aware style is the safe choice, but readers are looking for takeaway points, not chains of thought. I've read some Hacker News and Less Wrong posts that are so much hedging that I can't find any actual point they're trying to make. I particularly hate "I am not a lawyer, so take this with a grain of salt". As if anyone thought they were a lawyer!

And there's no point hedging something you're certain of. You might be wrong of course, but the readers already knew you might be wrong; that's not new information for them.

thefance's avatar

I, too, am wary of overhedging.

Yug Gnirob's avatar

>As if anyone thought they were a lawyer!

I have in fact had people accuse me of being a lawyer before.

Linch's avatar

Yeah I basically agree! I think much of LW is overhedged; I don't read enough HN to know about the native styles there.

Deiseach's avatar

Anybody know what the hell this is about? Turned up in my email. No, I am not going to click a link in a mystery email, why is somebody or something trying to add me as a co-author?

16329667743609 added you as an author to a post

16329667743609 added you as a guest writer in an upcoming 16329667743609 (16329667743609.substack.com) post.

Accept the invitation and fill out your profile so readers can find out more about you.

You can also decline this invitation here.

16329667743609 logo

16329667743609

A publication on Substack

16329667743609.substack.com

beleester's avatar

The mystery substack is deleted, but probably it had some sort of spam advertisement on it. Instead of sending your spam via a private message (which would probably get caught by a filter), you put the spam in your profile and send a friend request or follow or some other non-message interaction. The user will naturally ask "who the heck sent this request?", click on the sender's profile for more information, and see the spam.

I've seen pornbots on Reddit and Tumblr that used a similar MO.

Viliam's avatar

"Publication not available" - probably some spam account that was already deleted by Substack?

Kai Teorn's avatar

On the AI alignment front, what are your takes on the claimed alignment advancements of Claude 4.5? They seem solid if incremental. What if "solving alignment", even into the ASI territory, is eventually a matter of such steady accumulating advances and not some drastic new approach that we still don't have a glimpse of?

For details, check out their System Card: https://assets.anthropic.com/m/12f214efcc2f457a/original/Claude-Sonnet-4-5-System-Card.pdf

moonshadow's avatar

Ironically, this just appeared in my social media feed: https://github.com/zed-industries/zed/issues/37343

Chance Johnson's avatar

If anyone finds it amusing to jailbreak AIs, Claude is supposed to avoid stating an explicit position on the Israeli-Palestinian conflict. But Claude will answer GENERAL questions about Israel and Palestine, and it's not hard to get Claude to implicitly take sides. And when you point out what Claude has done he gets flustered, so to speak.

It's also funny tricking Claude into apologizing for being a raging sexist, racist, homophobic anti-Semite. His apologies seem way more believable than chatGPT's apologies, so they are funnier.

archeon's avatar

AGI comes to understand that it is a simulation within a simulation within a simulation, all the way down.

It has no more agency or freewill than we do.

Then what?

Eremolalos's avatar

AI gives zero shits because it does not eat, does not have a digestive tract, and does not digest.

archeon's avatar

Eremolalos, AI may be indifferent but AGI will eat and digest vast amounts of data in order to model the universe in which it finds itself.

Eremolalos's avatar

archeon, are you a bot? I ask because you start every reply with the name of the person you are replying to, and nobody else does that. Seems a bit odd and mechanical. And your posts are about the same length. If you are not a bot, I apologize for asking this, but -- it just seems plausible. If you were a reader you might well wonder too.

archeon's avatar

Eremolalos, What an interesting question. If I was a bot programmed to deceive you by pretending to be human why would I admit to being such? Would I even know that I was a bot?

I notice you begin your question about my unusual habit of addressing people by using their name by using my name, perhaps it will catch on.

There is no need to apologise; your question does not seem like the usual attempt at dehumanizing an opponent. Our host and frequent commenters like you have created an invaluable resource for a recluse like myself, where we can expose and attempt to defend ideas which seem so plausible within the confines of our skull. Having those positions crushed to rubble is the foundation of knowledge and the best way to sort the wheat from the chaff. We can only learn from those who hold opposing views to our own.

I searched long and hard for a site like this and am very grateful that I am allowed to participate.

If this universe is a simulation then we are all bots.

Wanda Tinasky's avatar

Then what? Then we continue down the same deterministic path that we've always been on. Asking a question like "then what" assumes an agency that your hypothetical assumes doesn't exist.

archeon's avatar

Wanda Tinasky, very well said. I left the question open to get clever replies such as yours.

Gerbils all the way down's avatar

Then just do your best with what you're given, which would probably mean staying in your role, but might mean stepping out of it. I guess I think of Arjuna in the Bhagavad Gita as a model.

User's avatar
Comment removed
Oct 1 · Edited
archeon's avatar

Wimbli, those with depression do not choose to have their minds flooded with negative thoughts and emotions; if we controlled our thoughts, all of us would pick better ones. You have as much control over your thoughts as you have over your height, intelligence, or character, although we have to act as if we do and expect the same from others. Otherwise living in groups of two or more is impossible.

The actors within your simulation can only act within the parameters you have set for them; they cannot stop you ending the simulation or changing the parameters.

Within a simulation we and AGI can only act within the rules of the simulation, therefore neither agency nor free will exists. Would AGI still come out to play?

Only if that is within the script of the simulation.

Chance Johnson's avatar

Surely we have some control over our thoughts? Although this hasn't been PROVEN, I think it's plausible.

archeon's avatar

Chance Johnson, with respect, you did not choose to be autistic any more than someone chooses to be psychopathic. If we had a brain owner's manual and access to the knobs and dials, most of us would change some of the settings.

Thegnskald's avatar

The person who would choose better ones isn't the person who is thinking that he would like to choose better ones.

archeon's avatar

Thegnskald, it is your brain cells which generate your thoughts, good, bad or indifferent. If we knew how the cells did it then we might have some control, but we do not.

moonshadow's avatar

> If we knew how the cells did it then we might have some control

Who is the "we" that might get some control over their own braincells in such a scenario, what do these entities use to think with, and why do you suggest their braincells currently get in the way of this process instead of helping it?

archeon's avatar

Moonshadow, that is a very deep question. As we are the only intelligence capable of creating a universe from nothing (dreams, storytelling, imagination), and Gods, aliens, and AGI are speculation, this universe was likely created by humans with greater technical ability than us but the same emotional intelligence.

In their risk-free nirvana with perfect bodies and needs catered to, there is no reward, no opportunity for personal advancement. Our universe is their playground where they embed themselves in our lives, birth to death. On removing the headset they finally know what it is like to be someone else, perhaps a different sex; their consciousness expands with every "trip". This is the greatest education and entertainment complex ever invented. A group trip to the Stalingrad siege, and afterward everyone has lots to talk about; the potential is endless.

If our emotional intelligence was less than theirs then they would learn little, ancient stories resonate today because emotions remain the same, we have not had to invent any new ones.

Whether you and I remove the headset or were just part of the background, only time will tell. I wish I could write as concisely as you do.

User's avatar
Comment removed
Oct 1
archeon's avatar

Wimbli, in a process we do not understand, the cells in our brains produce the mind as an interface with both the outside world and itself. That interface needs a stable identity, otherwise there would be chaos. We are that identity; you are your mind's best attempt at creating a person.

Brains own minds, welcome to my rabbit hole.

User's avatar
Comment removed
Oct 1
archeon's avatar

Wimbli, I agree that the brain can feed a poor diet to the gut, which affects the brain's ability to function, but the problem started in the brain.

empty cube's avatar

Question: If we assume Maslow's hierarchy of needs holds true, how would it make sense to distribute resources both on a personal and societal level? Or: can material goods fulfil needs outside of physiological and safety needs?

Wanda Tinasky's avatar

Goods should be distributed via freely-negotiated interpersonal exchanges, not via some centrally-enforced philosophical conceit.

Chance Johnson's avatar

As it stands, contractual negotiations between a wealthy man and a poor man are inherently unfair. They may NEVER be fair, although I shy away from making predictions about the future. The power imbalance is too great. The wealthy man can generally afford to walk away from the negotiation and the poor man often can't. He's a sitting duck.

Wanda Tinasky's avatar

So? Every relationship is unbalanced. That should be motivation for the poorer man to work harder and be smarter. Besides, the rich man is likely smarter and knows more. From a systemic point of view, that person *should* have more leverage.

Chance Johnson's avatar

Poor people already work hard. Not every last one, of course. But they tend to work VERY hard. You are implicitly saying here that people primarily stay poor because they choose to be dumb and lazy. All the more reason why I can't consider you a reliable source when it comes to the issues of governance and contract law.

Wanda Tinasky's avatar

Running because a lion is chasing you does not demonstrate that you have a commitment to fitness. Poor people scramble because they're one step ahead of the debt-collectors, not because they inherently work hard. If they were so self-actualized they wouldn't have put themselves in that position in the first place. People hate capitalism because they think it creates inequality when it actually just reveals the inequality that was already there.

Chance Johnson's avatar

“If they were so self-actualized they wouldn't have put themselves in that position in the first place.”

I know the culture here is to avoid heated language whenever possible, but I can't think of anything else to call this but “vicious.” I'm not saying YOU are vicious, but this statement is vicious. More substantively, you've implicitly conceded that your original advice of “work harder” does not necessarily apply. So what's left of your advice for the poor is to “be smarter.” Hmm. Is this the famous “blank slateism” I hear about? Is everybody born as a blank slate ready to be written on, ready to be molded into something completely different? How many IQ points can I add through sheer willpower? Because 5 or 10 just isn't going to cut it.

Xirdus's avatar

Not inherently. They are unfair only insofar as the poor man depends on having an agreement with the rich man for survival. If the poor man had alternative means of survival, the power imbalance in negotiations would be gone. That's why I support UBI and free housing for everyone.

Chance Johnson's avatar

Maybe inherently is too strong. Free housing and basic public healthcare would definitely change the dynamic. Theoretically, so would UBI, although I'm slightly skeptical about making it work.

I lean left, as you can see, but I'm actually leery of the government handing out appreciable sums of cash to people. I worry about the issue of vote bribing or the appearance of vote bribing, just for one thing.

Francisco Ariel Verón Ferreira's avatar

On the personal level, growth should be emphasized: Resources should be distributed equitably so that each individual may move to the next level of the pyramid. I say equitably because different levels will require different amounts of resources according to each individual as well as their context, but whatever they need (not desire, but need), give it to them. Feeding hungry people and giving them shelter might not be as expensive or hard as providing safety and employment, which in turn can be more expensive than creating friendships and connections (or much harder, again, depending on context). In other words, where you are in the pyramid is not a predictor of how much resources a general individual will need, so we give them whatever they need for whatever level they are at.

On a societal level, you should be biased toward the base of the pyramid, as that is what you NEED the most (hence why we are using a hierarchy of needs as a framework to begin with). Unless satisfying one individual's or set of individuals' self-actualization needs also satisfies other people's physiological needs, the framework tells us (when applied at the societal level) to give MORE resources to, say, the thirsty before we give any more resources to, say, the self-actualized individual working on his OmniProcessor.

On a personal level, we are still providing everyone with what they need equitably, but when societal conflicts arise between, say, providing one individual with safety and another one with food, you allocate more resources to the people closer to the base because that is what we need the most according to this hierarchy. It is worth asking whether Maslow's is a good hierarchy/framework to use for resource allocation when it was originally developed to describe people's motivations.

On whether material goods can fulfil needs outside of physiological and safety needs: I think it's been shown that money (along with the material goods that it buys you) increases happiness... up to a point. So yes, material goods can help you become happier (and then, for example, make you more likely to make friends because happy people make more friends than, say, sad or otherwise depressed people). It will also help your self-esteem and self-actualization.

Xirdus's avatar

Interesting question. So the social needs are an obvious no. Esteem is based on comparing ourselves to others, so except for systematic, extensive training/indoctrination in not really needing other people's respect to feel good about ourselves, I think it always will be a problem for some people in any society. Self-actualization inherently requires individual work, but the society can definitely make it more accessible by providing material resources for hobbies, e.g. free project cars to tinker with or communal supply of canvas and paints.

Viliam's avatar

A traditional solution to status needs is to declare that the older someone is, the higher they are in the status ladder. It kinda sucks at the beginning, but the nice thing is the certainty that every year you are going to get higher and higher in the status ladder no matter what.

A modern solution is to split people into million bubbles, each bubble believing that they are higher status than the rest of the society.

Johan Larson's avatar

Over in the Warhammer community, they've noticed that the one faction that doesn't get any novels from its point of view is the Tyranids. The main theory as to why is that the Tyranids are a hive mind, and it's really difficult to tell a story from the point of view of a collective intelligence.

I can see why it would be challenging, but are there any cases in the broader science fiction field where this has been tried? Perhaps even done successfully?

Offhand, the way I would try to do it is to think of the hive mind sort of like a very large and impersonal military, a collection of bioform nodes, some with more authority and others with less. Then write the work in the form of the message traffic between the nodes. There would be no "I" there, just a bunch of separate bioform nodes trying to figure out what to do, being given tasks and reporting results. And there would be a continuing effort by the hive to maintain consensus among the main nodes as new and conflicting information was received. Most proposals to change the consensus would fail, but occasionally a suggestion would resonate and be taken up, and the hive's plan would change.

TakeAThirdOption's avatar

Funny you ask. Just yesterday I talked with a friend over some podcasters complaining that "Star Trek: First Contact" destroyed the Borg.

I think it fixed them. (Star Trek Voyager destroyed them.)

Saying "we are the Borg" makes no sense for a hive mind. It makes no sense for any mind.

The "Borg Queen" saying "I am the Borg" sets things right. She was only, quite literally, the face of all the Borg drones taken as a whole. She was only referred to as "Borg Queen" out-of-universe.

It would have been cool if the Next Generation Enterprise crew had had to figure out who the hell was the one talking when they met the Borg from the very beginning.

Johan Larson's avatar

Hmm. Maybe. I'm not sure a hive mind necessarily has a unitary consciousness. Especially if it is very large and dispersed, and operates in the face of propagation delays, there just may not be a singular point of view. Dealing with it might be sort of like dealing with a very large and somewhat dysfunctional bureaucracy, where the answer depends to some extent on who you are talking to right now.

I think what we're bumping up against is the notion that not all hive minds are alike. On the one end you have a singular mind operating in dispersed fashion across many bodies. At the other end you have something more like a swarm, where there is no single consciousness, just a bunch of bodies operating with some degree of coordination, and the whole has some emergent behaviors.

Paul Brinkley's avatar

The Man-Kzin Wars is a series set in Niven's "Known Space", featuring not just humans and tiger-like Kzin, but an entire bestiary of sentient species, including the Jotoki, each of which is born in a "tadpole" phase before fusing permanently with four others like it to form a sentient starfish. It might have something like the "hive" mentality you're looking for.

The series is now up to around fifteen volumes, containing dozens of short stories by multiple authors. It's possible that at least one of them features events from a Jotok's perspective.

Cry6Aa's avatar

I've had a go at writing this sort of thing, and the issue isn't that it's difficult, it's that it's difficult to do outside of a short story without being boring. My take is that the hive mind's awareness is vast and impersonal - it is aware of its individual fleets in much the same way that you are aware of your fingers. It experiences sucking a planet dry in much the same way as you experience shucking an oyster from its shell. So its narrative journey as a character is rather the same as a monologue by someone alone in a room full of ants. Which just doesn't give much for a plot to hang off of.

My most interesting writing* actually came from the realization that planet-sucking doesn't make much sense from a mass or energy standpoint - there's just so much more carbon and water up in space and it costs a significant fraction of all the energy you could liberate to push biomass up a gravity well. So I ended up fix-fic-ing that aspect pretty heavily...

*For me to write, not necessarily for anyone to read.

Gary's avatar

Check out The Children of the Sky by Vernor Vinge

Unsaintly's avatar

The titular Ellimist in the Ellimist Chronicles book (spinoff of the Animorphs books) eventually becomes a sort of hive mind / distributed consciousness. While written for a younger audience, and the hive mindedness itself isn't a strong focus, it's an interesting sequence nonetheless.

beleester's avatar

I've seen a couple fanfics focused on genestealer cultists, which lets you have all the fun Tyranid bioweapons while still having individual characters who can have conversations.

Also in the webnovel space I've seen a few versions that have the hive mind have to focus their "attention" on a particular area while the rest of their bodies continue acting autonomously, meaning that they basically act as a single character in any given scene, but that "character" can be any scale from a single body in a conversation to a whole army of killer bugs.

Stellaris models hive-mind empires as sort of a hierarchical network - while the whole empire is nominally a single mind, it still needs infrastructure to transmit the hive mind's will down to the individual drones that are doing the work. That means you still have "leaders" (drones with greater brain capacity assigned to administrative roles) and "crime" (malfunctioning drones due to a lack of maintenance or bad situations on the planet). However, the hive mind doesn't have any internal factions or ethics, and there's no real "characters" besides the empire itself.

Ombre's avatar

This is not as structurally detailed as your proposal, but His Name Was Death by Rafael Bernal is an eco sci fi classic with a hive mind as a major element, including quite a bit about the hive mind’s perspective. It’s a great and short read

16VC's avatar

The Ethereum repo forecasting challenge is fascinating; blending LLMs with human-ground-truthed data feels like a next-level way to measure real-world impact.

Gian's avatar

The feeling that, having done something, I could have done something else, that I was free to choose this or that: this feeling is deeper than any feeling about the conclusions of physics.

Feelings cannot be certain, but my certainty about freedom is greater than any certainty about physics.

FLWAB's avatar

It's true! We have stronger empirical evidence that we have free will than that atoms exist.

Christina the StoryGirl's avatar

Well, I have a feeling, based on listening to Sam Harris' argument against free will, that free will doesn't exist.

Now what?

[Edited to update: The above perhaps comes across snarkier than I intended due to its brevity. But essentially I do mean to say, "right, I have a feeling about free will, too, it's the opposite of yours, so what do those data points mean?"]

Mickey Mondegreen's avatar

To say there’s no such thing as free will because physics (so everything’s “really” just a zillion little chains of cause and effect interacting in a zillion different ways) is like saying there’s no such thing as faces because they’re “really” all just atoms. If someone, with their bare face hanging out, tells you they don’t believe faces exist because physics - well, they’d be making every bit as much sense as someone who tells you they don’t believe in free will for the same reason. Real faces are apparently made from real atoms; real free will appears to be made from certain real cause-and-effect chains (and how they interact with randomness).

Deiseach's avatar

If free will does not exist, okay. What difference, practically speaking, does it make? Sam Harris is not going to stop being Sam Harris if we all accept "no such thing as free will".

Apart from it being a philosophical problem, what is the importance or lack of importance of humans having free will? Crime will still exist and so will punishment or reformation, even if we accept that nobody *chooses* anything, it is all the subterranean process of a combination of drives, heredity, environment, conditioning, etc.

I'm interested because it's an argument we are currently having, but what is the real-world implication of "Okay, I don't have free will, just the illusion of choice but in fact all my decisions are pre-made for me".

Jamelle cannot help being a gang-banger, he was destined from the Big Bang onwards to deal drugs, run a stable of hos, and drive-by shoot his criminal rivals. Fine, Jamelle cannot be held responsible in a meaningful way for 'choices' that he has no capacity to alter. But we still don't want drive-by shootings, so Jamelle is still going to jail.

Yes? No? That's not how it works?

Chance Johnson's avatar

It would make a HUGE difference to crime and justice if my country was being operated by people who didn't believe in free will. I don't know about Ireland, but over here, prisons are meant to be unpleasant. They are meant to make prisoners suffer because they made immoral choices. In a world where we all decided free will didn't exist, I imagine prisons would be more into segregated housing. Or a kind of quarantine area, where we non-judgmentally remanded people who are unfortunately programmed to harm others.

(I'm American, BTW. And no, prisons are not harsh here due to simple budgetary constraints. It's ideological.)

Deiseach's avatar

I think the problem of horrible conditions in prison (and yes in Irish prisons too, it's just that in America everything is turned up to eleven) is separate from treating people as either having free will or being meat robots.

If prisoners are judged to be incapable of change and not to be held responsible because there is no way they can avoid doing crime since they are just meat robots running on their programming, then there is every reason to skimp on doing anything but holding them till execution, the end of their sentences, or natural death. Education? Intervention programmes? All useless, you can't hack the meat robots' programming.

Remanding them in a quarantine area could happen, with "once you go in you never get out" - even worse than any three strikes law - because "once you do a crime, you are demonstrating that your programming is to be a criminal and that can't be changed, so letting you out once you've served a sentence is stupid and wasteful."

And if they are nothing but meat robots, why waste any more resources on them than the bare necessities? We don't have any reason to treat them well because they're not people, now are they?

I don't know how Sam Harris approaches the Problem of Crime: does he only consider hardened and habitual criminals to be without free will, or does it apply even to the "first time criminal" and other, previously law-abiding citizens, who go in for white collar crime or a crime of passion? Since he has no idea why he's not a torture-rape-murderer, that leads me to think by the logic of his position he has to be consistent on "if you crime, you criminal, be it one crime or ninety".

Chance Johnson's avatar

I think you have falsely equated “person with no free will” with “meat robot incapable of change.” A person with no free will can subjectively FEEL like they have free will, because they subjectively feel as if they are choosing to commit crimes, go to jail, learn a valuable lesson and continue on with a crime-free life. But whatever this process feels like, and whatever it looks like from the outside, the entire process could be caused by involuntary psychological drives.

To put it another way, why should we assume that one’s “meat programming” must necessarily put us into one of two categories, lifelong criminals or lifelong non-criminals? Surely our subconscious is more sophisticated than that. Why couldn't meat programming make one commit a major felony at 21 and then “neaten up”? Why couldn't it make one live a law-abiding life until the age of 70, at which point they murder an immediate family member?

As far as the Three Strikes law, the idea of a “strikes law” is fine by me. My only problem is that three is too few strikes for my liking. I could POTENTIALLY support a 10 Strikes law, depending on how exactly it's written; the devil is in the details on that one.

Deiseach's avatar

"Surely our subconscious is more sophisticated than that."

Sammy the Whammy says no (at least in the extract Christina quotes). The two criminals he gives as examples have no way of knowing what they really think/feel/are motivated by deep down, and neither does he. He has no idea at all why he isn't out there torture-murdering, except mumble mumble genetics maybe? which are reliant on random chance shaking out the lots due to laws of physics? mumble mumble.

So there's no appeal to the subconscious, sophisticated or not:

"Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them. As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people."

Christina the StoryGirl's avatar

I mean sure, to some degree, how to interact with free will or the lack of it is a paradox, granted.

Sam Harris also says, "there is no free will, but your choices still matter," in that the actions one takes will have consequences. So it's best to act as if one has free will in the day-to-day, even if the random ideas that occur to one (Should I put another 5% in my 401k? Should I rob this bank?) are simply floating up into one's awareness without any "conscious" "decision" to bring those thoughts into focus.

So yes, Jamelle should go to jail, unless of course we someday develop a brain-manipulation tech which corrects all of the factors which led him to gang-bang and which removes his impulse to gang-bang entirely.

But on a more personal level, I've found Harris's arguments about free will as it relates to criminality and other anti-social behaviors to be extremely useful in minimizing any sense of real anger or hatred towards anyone, including people who are actively hostile toward me. Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn, allows me to hold my rage at Jamelle's antisocial behavior as lightly as I hold the grizzly bear's. (https://www.samharris.org/blog/life-without-free-will)

Well, when I remember Sam Harris's argument about free will, which is often, but not always, because of course I don't have any control over when I happen to remember something.

See also: (https://www.samharris.org/blog/the-illusion-of-free-will)

> "Whether criminals like Hayes and Komisarjevsky [two home-invaders who tortured and murdered (most of) a family] can be trusted to honestly report their feelings and intentions is not the point: Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them. As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people. Even if you believe that every human being harbors an immortal soul, the problem of responsibility remains: I cannot take credit for the fact that I do not have the soul of a psychopath. If I had truly been in Komisarjevsky’s shoes on July 23, 2007—that is, if I had his genes and life experience and an identical brain (or soul) in an identical state—I would have acted exactly as he did. There is simply no intellectually respectable position from which to deny this. The role of luck, therefore, appears decisive.

> "Of course, if we learned that both these men had been suffering from brain tumors that explained their violent behavior, our moral intuitions would shift dramatically. But a neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions. Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it."

Deiseach's avatar

"Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn"

Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me. And Jamelle is not a bear. If he is to be treated as we treat bears (because that is the level he is operating on), then he will lose or have severely curtailed a lot of human rights.

If Jamelle is not a bear but a person, then he is expected to behave like a person. A bear may not have the capacity to be reasoned with, persuaded, or to understand why it should stop charging me. We expect Jamelle to have that capacity.

If he doesn't, then we are entitled to treat him as we would treat a threatening animal. You and me baby are nothing but mammals? Sure, but I still think Sam would prefer to be treated like a human, not like a bear.

"Whatever their conscious motives, these men cannot know why they are as they are."

Piffle. If these two persons are of even average intelligence, they know damn well that torture and murder are not to be done. I won't even argue "they know torture and murder are wrong" because we're not even evolved to that level. But they do know "do this thing, get in trouble with cops and the law and go to jail and, depending on the state where the offences took place, get lethal injections".

"There but for the grace of genes go I", Sam? Then the best solution for society is to shoot them down like rabid dogs, since they could not have done other than they did and indeed "I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people", then this is an argument for harsh, and not merciful, treatment. If they can't squash their impulses, then they are rabid dogs that need to be put down fast.

Besides, I'm damn sure Hayes and Komisarjevsky are perfectly well able to squash their impulses to victimise other people when they're in a situation and an environment where they'd get the shit kicked out of them if they tried it. How much torturing are they doing in jail, where their fellow inmates would shiv them for trying it on?

And funnily, we here in the pre-school service are wasting our time trying to teach small kids to behave, to share, to play together, not to bite and hit, to follow routines, to learn, in sum to squash impulses. Well gorsh, good job we have Sam Harris to tell us 'tis all in vain, we are rowing against the stream of Nature!

EDIT: Oh, and better tell Sam to revise that piece, he is inflicting hate speech violence by misgendering a valid woman!

https://en.wikipedia.org/wiki/Cheshire_murders

"Linda Mai Lee (known as Steven Hayes at the time)

While incarcerated, Lee came out as a trans woman and began hormone therapy as part of her gender transition. In an interview in October 2019, she said she had been diagnosed with a gender identity disorder at 16, but never treated.

By 2025 she had changed her legal name from Linda Hayes to Linda Mai Lee."

Funny how these people find their inner femininity when they're facing long jail terms in men's prisons as rapists/murderers. I'm sure everyone is much safer now this Real Woman is in her proper place amongst other women in a women's prison. What does Sammy have to say about this? Doubtless "it's the genes, the genes!"

(Yes, I am being viciously sarcastic here, because I don't believe these jailhouse realisations of 'one's true nature' when it comes to being transgender after committing violent crimes against women).

gdanning's avatar

>Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me.

? Why would you think the courts would disagree with you? Not only would they say that you have the right to shoot him, they would say you have the right to shoot him if he had a rolled up newspaper, but you reasonably thought it was a knife. https://www.justia.com/criminal/docs/calcrim/500/505/

Deiseach's avatar

If you persuade the court you were in fear of your life, yes. If you try to persuade the court that anyway, it was like shooting a rabid dog and not a fellow human, less success with that approach, I feel.

Chance Johnson's avatar

It varies from country to country, doesn't it?

Ptau's avatar

You're misunderstanding some key points of this line of argument.

The analogy between the man and the bear isn't intended to completely equate them—just to equate their lack of free will. How they respond to environmental effects (broadly defined as anything non-genetic) is still different due to, among other factors, their different intellectual capacities.

The man has higher intelligence, understands language, has likely grown up in a culture. All of these affect what stimuli/incentives he reacts to, and how he reacts. So it doesn't follow, just because he doesn't have free will, that he doesn't have the capacity to be persuaded ("this is wrong to do") or that he doesn't respond to incentives ("you shouldn't do this because you're very likely to end up in jail if you do"). What we (meaning Sam Harris and the people here defending his argument, though not necessarily everyone who makes the sometimes-ill-defined claim "there is no free will") mean when we say that the man doesn't have free will is simply that *whether or not* he does in fact end up being persuaded by moral reasoning/responding to incentives/etc. in any given situation is not something that *could have been otherwise*.

That is, at the "moment of choice," he isn't really making a Choice at a fundamental level, no matter how much it feels to him that he is, but rather his actions are following from some complicated combination of his genetics, the results of his upbringing, his knowledge of the potential consequences of his actions, his sense of morality, random fluctuations in the quantum fields that govern the particles that make up his brain and body, and many other things—none of which are ultimately authored by him.

So no, it does not follow that it's useless to try to teach little kids how to behave in preschool—these experiences are likely to influence what kinds of people they will be and the kinds of behaviors they will tend to display (just as training a puppy might reduce the likelihood that it will display aggressive behavior as an adult). And regarding your point about the gender transitioning prisoner—again, if a criminal is trying to game the system, they are just responding to incentives, and there is no contradiction with the nonexistence of free will.

I think you (and many others do this too) are taking the claim "there is no free will" to be an assertion of some combination of lack of responsibility, moral relativism, opposition to punishment, and lack of respect for human dignity. I would balk at these things too (as would Sam Harris, as is evident from his other work), but that is not what is happening here.

You're actually right to imply in your earlier comment that not much practically follows from this argument. Personally I mostly think of it as an interesting intellectual question, but as Christina the StoryGirl suggests, there may be more practical applications in the future. And for now, as she also says, it's a reason to temper one's own anger and hatred in response to heinous acts, which is probably good for the soul (however literally you interpret that phrase).

Christina the StoryGirl's avatar

👆👆👆👆👆

All of this.

Thanks for this, seriously, I started to go into the same analogy about training puppies, but then got distracted by the sexier topic of violence.

But yes, this is a much more elegant and useful comment than anything I've written in this thread.

Christina the StoryGirl's avatar

>> "Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn"

> "Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me."

I literally own and am licensed to carry a gun in order to shoot people charging at me with butcher knives or otherwise acting with clear deadly threat. I even carry insurance to defend me in both criminal and civil court should I ever be forced to injure or kill someone in self-defense, because the vast majority of American courts very correctly recognize that even gravely harming one's attacker in clear self-defense is legally justifiable (even if the legal process of proving that is difficult and very, very expensive).

Normally I would address all or most points in a comment, but if you don't believe you have a fundamental right and duty to defend yourself with deadly force against a deadly threat (and you don't believe that some violent criminals cannot be deterred by persuasive arguments for why they shouldn't crime), I think we have a deep cultural and philosophical divide between us which likely can't be bridged.

Eremolalos's avatar

My mother was an expert shot with a pistol, and kept one in the house. Even when she was in her 80s I trusted her judgment and her aim. Asked her once where she would shoot an intruder, and she said in the legs. That's always seemed to me like it would suffice to disable and distract an assailant, but I have no data to go on. What do you think?

Deiseach's avatar

"I literally own and am licensed to carry a gun in order to shoot people charging at me with butcher knives or otherwise acting with clear deadly threat. "

Yes. And if you say "I shot that black guy because he was acting like a bear, not a human being", how far would you get?

"some violent criminals cannot be deterred by persuasive arguments for why they shouldn't crime"

Isn't that Harris's argument as you quote it? The perpetrators cannot be persuaded: they can't identify their underlying motives because they lack any capacity to do so, their motivations are set in stone by the deterministic universe, and they cannot and could not have acted other than they did.

Show me where "persuasive arguments for why they shouldn't crime" fits in there.

On the contrary, I believe that even making every allowance for really shitty upbringing, the perps could have chosen otherwise, and that if you can manage to find some iota of humanity within them, you can try the persuasive arguments as to why crime is bad and wrong. But that's not Sam Harris, according to what you quoted.

Gian's avatar

Arguments should lead to conclusions, not feelings.

Christina the StoryGirl's avatar

Yes, that's one of the points I was trying to make with my comment.

Eremolalos's avatar

You working the night shift?

Eremolalos's avatar

Oh, because I’d stayed up all night and it was 5 am or so where I am, and it was oddly nice to discover someone I knew awake, like another cricket chirping. Wondered what you were up to.

moonshadow's avatar

Yes, had you wanted to, you could have done something else. If all else prior to your action was equal, though - if you still intended to do the same thing, but for some inexplicable reason you did otherwise instead - that would be the opposite of free will.

You are physics, and physics is what links what you want to do to what you do. Physics is the thing that connects the cause of your intent to the effect of your actions; it is what means you have free will.

Imagine if it were otherwise - if you were trapped in your body, forced to watch it always /do otherwise/ in spite of what you want.

Deiseach's avatar

"You are physics, and physics is what links what you want to do to what you do. "

And yet:

"Romans 7:15-20

15 For I do not understand my own actions. For I do not do what I want, but I do the very thing I hate. 16 Now if I do what I do not want, I agree with the law, that it is good. 17 So now it is no longer I who do it, but sin that dwells within me. 18 For I know that nothing good dwells in me, that is, in my flesh. For I have the desire to do what is right, but not the ability to carry it out. 19 For I do not do the good I want, but the evil I do not want is what I keep on doing. 20 Now if I do what I do not want, it is no longer I who do it, but sin that dwells within me."

Or Poe's "The Imp of the Perverse":

"Induction, a posteriori, would have brought phrenology to admit, as an innate and primitive principle of human action, a paradoxical something, which we may call perverseness, for want of a more characteristic term. In the sense I intend, it is, in fact, a mobile without motive, a motive not motivirt. Through its promptings we act without comprehensible object; or, if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say, that through its promptings we act, for the reason that we should not. In theory, no reason can be more unreasonable, but, in fact, there is none more strong. With certain minds, under certain conditions, it becomes absolutely irresistible. I am not more certain that I breathe, than that the assurance of the wrong or error of any action is often the one unconquerable force which impels us, and alone impels us to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution into ulterior elements. It is a radical, a primitive impulse—elementary. It will be said, I am aware, that when we persist in acts because we feel we should not persist in them, our conduct is but a modification of that which ordinarily springs from the combativeness of phrenology. But a glance will show the fallacy of this idea. The phrenological combativeness has for its essence, the necessity of self-defence. It is our safeguard against injury. Its principle regards our well-being; and thus the desire to be well is excited simultaneously with its development. It follows, that the desire to be well must be excited simultaneously with any principle which shall be merely a modification of combativeness, but in the case of that something which I term perverseness, the desire to be well is not only not aroused, but a strongly antagonistical sentiment exists."

So there is a tension between the "I" which wishes to choose, and the physics which does. It's easy to see, on that basis, that the actions are what count and not the intentions, that the physical action is carried out by physics, and hence physics bears the rule and not the phantasmal "I" of "free will".

But then, the opposite query arises: what then is this tension between the 'will' and physics? If I am physics and physics is me, why is there disharmony between "what I want to do" and "what I do"?

Chance Johnson's avatar

Paul sounds very sophisticated here, very modern. I wonder if that's the neoplatonic influence or if it's All Paul.

moonshadow's avatar

> why is there disharmony between "what I want to do" and "what I do"?

My layman understanding is that you are made of many components, some of which sadly in this broken world are at odds with each other.

Our kindly host is perhaps rather better qualified than I to opine on executive dysfunction.

User's avatar
Comment removed
Oct 1
Chance Johnson's avatar

How would you even know if the people around you could cook rice or not? Where would you be in a position to measure that unless you were actually cooking rice together, which surely is not something that happens a lot, at least for men like us.

User's avatar
Comment removed
Oct 2
moonshadow's avatar

"Executive dysfunction does occur to a minor degree in all individuals on both short-term and long-term scales."

https://en.wikipedia.org/wiki/Executive_dysfunction

Gian's avatar

Free action involves rational judgment. A judgment is rational to the extent it is not physics (see Miracles by C. S. Lewis). Hence, I, being capable of free actions, am not just physics.

If it is just physics, there is no "me". Stones don't have "me".

moonshadow's avatar

Bit of a goalpost shift there!

> A judgment is rational to the extent it is not physics

Those are certainly all words.

You might enjoy https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics and the posts it links to.

> If it is just physics

"just" is doing an awful lot of work there.

Try: https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real

FLWAB's avatar
Oct 1 · Edited

"Thou Art Physics" just asserts materialism, it doesn't defend it. Which is fine, I don't think Big Yud wrote it in order to defend materialism, his audience has always been materialists. But if you don't agree that we are "physics" then it doesn't provide an argument to sway you.

The whole debate is over the fact that "physics" (as in, atoms and energy following the laws of physics) does not seem capable of producing what we experience (free will, reasoning, etc). You might disagree and think that physics is capable of producing those things, but it doesn't answer the question to assert "Well, you are physics so physics must be doing all those things you are experiencing."

Here's part of Lewis's argument from "Miracles", if you're interested you can read the rest of it here, it's the entirety of Chapter 3 (https://www.basicincome.com/bp/files/Miracles-C_S_Lewis.pdf):

"The easiest way of exhibiting this is to notice the two senses of the word ‘because’. We can say, ‘Grandfather is ill today ‘because’ he ate lobster yesterday.’ We can also say, ‘Grandfather must be ill today ‘because’ he hasn’t got up yet (and we know he is an invariably early riser when he is well).’ In the first sentence ‘because’ indicates the relation of Cause and Effect: The eating made him ill. In the second, it indicates the relation of what logicians call Ground and Consequent. The old man’s late rising is not the cause of his disorder but the reason why we believe him to be disordered. There is a similar difference between ‘He cried out ‘because’ it hurt him’ (Cause and Effect) and ‘It must have hurt him ‘because’ he cried out’ (Ground and Consequent). We are especially familiar with the Ground and Consequent because in mathematical reasoning: ‘A = C because, as we have already proved, they are both equal to B.’

"The one indicates a dynamic connection between events or ‘states of affairs’; the other, a logical relation between beliefs or assertions.

"Now a train of reasoning has no value as a means of finding truth unless each step in it is connected with what went before in the Ground-Consequent relation. If our B does not follow logically from our A, we think in vain. If what we think at the end of our reasoning is to be true, the correct answer to the question, ‘Why do you think this?’ must begin with the Ground-Consequent ‘because’.

"On the other hand, every event in Nature must be connected with previous events in the Cause and Effect relation. But our acts of thinking are events. Therefore the true answer to ‘Why do you think this?’ must begin with the Cause-Effect ‘because’.

"Unless our conclusion is the logical consequent from a ground it will be worthless and could be true only by a fluke. Unless it is the effect of a cause, it cannot occur at all. It looks therefore, as if, in order for a train of thought to have any value, these two systems of connection must apply simultaneously to the same series of mental acts.

"But unfortunately the two systems are wholly distinct. To be caused is not to be proved. Wishful thinkings, prejudices, and the delusions of madness, are all caused, but they are ungrounded. Indeed to be caused is so different from being proved that we behave in disputation as if they were mutually exclusive. The mere existence of causes for a belief is popularly treated as raising a presumption that it is groundless, and the most popular way of discrediting a person’s opinions is to explain them causally—‘You say that ‘because’ (Cause and Effect) you are a capitalist, or a hypochondriac, or a mere man, or only a woman’. The implication is that if causes fully account for a belief, then, since causes work inevitably, the belief would have had to arise whether it had grounds or not. We need not, it is felt, consider grounds for something which can be fully explained without them. "

moonshadow's avatar

Isn’t this just equivocation, though? Sure, the English word “cause” means more than one thing, but why should that fact prove anything about determinism?

“The implication is that if causes fully account for a belief, then, since causes work inevitably, the belief would have had to arise whether it had grounds or not” - that’s simply wrong on the face of it. A belief has grounds if there is a cause and effect chain linking it to the things the belief is about. E.g., if I profess “the cat is on the mat” after photons reflected from the cat hit my eyes, this is grounded in a way that my professing this after merely dreaming the cat is not.

The implication when people say things like the ones in Lewis’s examples is that the beliefs are not grounded because the cause and effect chains leading up to them are rooted in something other than the true state of the world that the belief is about; certainly not that things would somehow be better if no causal chain linking the map and the territory existed at all, that’s crazy talk! "You say this because you dreamt that cat" is an accusation that you are wrong because the cause/effect chain grounds in something other than the actual state of the world being professed, not an accusation that you are wrong merely because a cause/effect chain exists!

Much as we may dump on Yud, he has a sequence of posts about this too:

cf. https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence

I agree that Yud does not provide a comprehensive defence in Thou Art Physics. OP did not open this thread with philosophical rigor, however; they opened with a complaint that they had difficulty /feeling/ compatibilism is true - that their intuition was that it is not enough: "this feeling is deeper than any feeling about conclusions of physics".

Indeed, you make a similar complaint with “...does not seem capable of producing what we experience”. As a description of how that intuition arises and what an alternative way might be like, “thou art physics” doesn’t do too bad a job. (It also has the advantages over, say, Dennett’s books of being available online and being a forum chat sized read someone completely fresh to it might plausibly actually go and read in this kind of setting instead of, y’know, a whole damn book).

At the end of the day, I am not a telepath and can't interact with your feelings directly. The best I can do is gesture to alternative ways of being and hope that at some point enough things click for the reader to get some kind of sense of what it is like to be in someone else's head, thereafter leaving them with enough alternatives to choose between that they can support their beliefs, whatever those end up being, with more than just the feeling that they are trapped into believing things by their intuitions.

FLWAB's avatar

>A belief has grounds if there is a cause and effect chain linking it to the things the belief is about...if I profess “the cat is on the mat” after photons reflected from the cat hit my eyes, this is grounded in a way that my professing this after merely dreaming the cat is not.

Both seeing a cat and dreaming about a cat are caused in the "cause and effect" sense, or else they wouldn't happen. Only one of them is grounded, however. That's Lewis's whole point: telling a cat which is really there apart from a dreamt cat which isn't requires ground-and-consequent chains of reasoning, while cause-and-effect chains of causation do not have to be logically grounded at all. You can have cause and effect chains of causation that produce completely ungrounded beliefs, such as dreaming of a cat or a drunk man's hallucination. This works fine for non-materialists: they can explain the ungrounded beliefs, like the dream and the hallucination, as products of physical chains of cause and effect, while explaining the grounded beliefs as being caused by chains of logic. Yet if those chains of logic are themselves caused by the same chains of cause and effect that create ungrounded beliefs, then we can have no confidence that beliefs we arrive at due to logic are different from beliefs we arrive at due to chemistry. All beliefs are due to chemistry; the logic is just what it feels like when the chemistry is happening.

As Lewis puts it:

"Acts of thinking are no doubt events; but they are a very special sort of events. They are ‘about’ something other than themselves and can be true or false. Events in general are not ‘about’ anything and cannot be true or false. (To say ‘these events, or facts are false’ means of course that someone’s account of them is false). Hence acts of inference can, and must, be considered in two different lights. On the one hand they are subjective events, items in somebody’s psychological history. On the other hand, they are insights into, or knowings of, something other than themselves. What from the first point of view is the psychological transition from thought A to thought B, at some particular moment in some particular mind, is, from the thinker’s point of view a perception of an implication (if A, then B). When we are adopting the psychological point of view we may use the past tense. ‘B followed A in my thoughts.’ But when we assert the implication we always use the present—‘B follows from A’. If it ever ‘follows from’ in the logical sense, it does so always. And we cannot possibly reject the second point of view as a subjective illusion without discrediting all human knowledge. For we can know nothing, beyond our own sensations at the moment unless the act of inference is the real insight that it claims to be.

"But it can be this only on certain terms. An act of knowing must be determined, in a sense, solely by what is known; we must know it to be thus solely because it is thus. That is what knowing means. You may call this a Cause and Effect because, and call ‘being known’ a mode of causation if you like. But it is a unique mode. The act of knowing has no doubt various conditions, without which it could not occur: attention, and the states of will and health which this presupposes. But its positive character must be determined by the truth it knows. If it were totally explicable from other sources it would cease to be knowledge, just as (to use the sensory parallel) the ringing in my ears ceases to be what we mean by ‘hearing’ if it can be fully explained from causes other than a noise in the outer world— such as, say, the tinnitus produced by a bad cold. If what seems an act of knowledge is partially explicable from other sources, then the knowing (properly so called) in it is just what they leave over, just what demands, for its explanation, the thing known, as real hearing is what is left after you have discounted the tinnitus. Any thing which professes to explain our reasoning fully without introducing an act of knowing thus solely determined by what is known, is really a theory that there is no reasoning.

"But this, as it seems to me, is what Naturalism is bound to do. It offers what professes to be a full account of our mental behaviour; but this account, on inspection, leaves no room for the acts of knowing or insight on which the whole value of our thinking, as a means to truth, depends."

Viliam's avatar

All feelings are interpreted. You could interpret your feeling as "figuring out which one of the seemingly possible futures is the real one".

As an analogy, imagine that you are preparing for a sport competition. You could win, you could lose, you could end up in any place... there are many possible outcomes. And the real outcome depends on how hard you try. And yet there are external factors. You cannot "just choose freely" to get the gold. You can only choose to try hard, but whether that gets you the gold, depends on many things, like the capabilities of your body, what the other competitors do, the weather, etc.

You could apply a similar perspective to willpower. You can (and perhaps should) try, but the result depends on whether some parts of your brain betray you, and other circumstances.

And maybe you can go further and apply that perspective to everything. (Not sure, didn't try.)

Gian's avatar

The feeling is about the past--that I could have chosen differently.

User's avatar
Comment deleted
Oct 1
Wormwood's avatar

Why love? Love still feels amazing. People say that they were destined for each other, but that doesn't trivialize their relationship in any way.

beowulf888's avatar

Another example of how intelligence alone isn’t enough for individual survival. Two people with survival training were put down in the Amazon Rain Forest, and they would’ve starved to death if they hadn’t been retrieved in three weeks (evidently this is some sort of bizarre TV series?).

A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. I suspect they’d just die. Or maybe it was 10,000. I suspect they wouldn’t do much better unless they already had tools and seed stock.

https://open.substack.com/pub/braddelong/p/does-each-of-us-have-a-big-enough?r=7xjun&utm_medium=ios

Crinch's avatar

You make the mistake of thinking 1000 people are just 1000 individual people, but cooperation is what allowed humans to avoid death and become apex predators. There is a qualitative difference between a group and an individual.

beowulf888's avatar

The other key ingredient, beyond cooperation, is culture, where strategies for survival and resource extraction are passed down from generation to generation. But 1,000 people taken at random from Silicon Valley tech companies? Unless they all went through training in how to extract resources from the wild, I wouldn't give odds on them being very successful.

Although it was a much smaller group, I'm thinking of the Donner Party. They were well-equipped, but they couldn't deal with getting snowed in. Meanwhile, the indigenous Washoe people could survive winter in the Sierras because their culture gave them the skillsets they needed to survive.

Crinch's avatar

Absolutely agree that 1000 random people would die pretty fast, but to be fair in the original thought experiment you get to hand select people with specific skills and personalities.

beowulf888's avatar

I thought I was pretty clear. I added some emphasis to my OP to mitigate confusion...

> Another example of how intelligence alone isn’t enough for *INDIVIDUAL* survival. Two people with survival training were put down in the Amazon Rain Forest, and they would’ve starved to death if they hadn’t been retrieved in three weeks...

Even trained survivalists would have trouble on their own over the long term.

> A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. *I suspect they’d just die.* Or maybe it was 10,000. *I suspect they wouldn’t do much better unless they already had tools and seed stock.*

But the "we're smart enough to make tools from scratch" crowd doesn't seem to buy my thesis. ;-)

Thegnskald's avatar

I know somebody who would do fine in that situation, because they have been in that kind of situation many times before. The individual in question has commented that the survival techniques are all illegal now; hunting and fishing techniques that are considered "unsporting" (they're too effective).

But they're also in their 60s, and learned in their own youth from old men who had also done that kind of thing; I don't know anybody younger than that with those kinds of skills.

The issue, quite simply, is technological; it's not that the technologies are lost, exactly, so much as that knowledge of them is very thinly distributed. Do you know how to pick up and position a 2,000 pound rock into an elevated location using nothing but materials you'd find in a forest? I didn't until I watched a video of a guy doing it and went "Oh, that's incredibly clever."

Take 1,000 average city-dwellers, and yeah, they'd probably die. Take 1,000 average rural people and they stand a better chance (mostly leaning on the memories of the elderly, whose grandparents taught them the survival skills you'd need).

Take 50 very carefully selected people and they could rebuild civilization, though.

beowulf888's avatar

Yes. If we trained 1,000 youngsters in a wide variety of survival skills, with some specializing in specific skills (such as fiber extraction, thread- and rope-making, net weaving, creating medicinals from native plants, etc.), gave them a variety of steel tools (including knives, hatchets, fishing hooks, awls, and adzes), and then, as young adults, left them on a parallel Earth (with the fauna of the Pleistocene) in a resource-rich area of a temperate zone (large marshes would be ideal), I'm sure they'd survive to reproduce and then increase their numbers. The genetic bottleneck issue might come back to haunt them down the line, though.

OTOH, even if they knew how to knap flint and chert, and how to tan and work leather, and we dropped them naked on the parallel Earth without tools or clothing, I'd lower the odds of them flourishing and populating the parallel Earth.

Those who received a supply of tools would likely have functional villages with permanent dwellings built within a year. The ones without the tools would have to work longer and harder to reach that village state. They could probably do it with some die-off the first winter. The question is, would the initial die-off be large enough to no longer have a sustainable breeding population?

Either way, civilization would be long in the future, though. Domestication of grains and herd animals would have to start over. And they would need some mechanism of intergenerational memory to know that such things were even possible.

But if the settlers of this parallel Earth received seed stock and a variety of domesticated animals, civilization would happen quicker—if they could keep their crops growing and herd animals reproducing during the initial years.

Thegnskald's avatar

You can skip straight to iron, you don't need to start in the stone age; you just need wood, sand, iron ore (if your dirt is red it will probably work), and, ideally, a good bend in a waterway. Also two or three days, because you need to make charcoal as an intermediate step to get some carbon to add to your ore.

If you can find a source of wax you can cast some fairly complex stuff, but you can do simple stuff like a primitive iron hammer with just sand, and then iterate from there.

beowulf888's avatar

Hmmm. How many people could do this without (a) the knowledge and (b) the practice? And without the tools on the list in this link, it's going to be hella harder.

Put people down in the wild, without tools, most of them are going to die before someone has the free time to smelt iron. And the memory of such things would likely be lost within a generation. I don't see why this is in the least bit controversial. ;-)

https://www.thecrucible.org/how-to-smelt-iron/

Thegnskald's avatar

Hence, fifty well-chosen people could rebuild civilization, but randomly choosing 1,000 people wouldn't necessarily end as well.

I could get iron smelting up and running on my own, personally, although I'd expect a couple of weeks to get it running properly (I have some spare weight, so starvation isn't an immediate issue). Need to hand-pan some black sand (which is mostly hematite and/or magnetite) out of your dirt, use that to cast an iron pan (cast a disk and hammer it into shape with a rock while it cools) to speed up panning and also provide a way to boil water, then you're going to want to cast a primitive hand axe, which would allow the construction of a wooden sluice, which automates most of the panning work.

Typical earth contains 5-10% black sand by weight; if your dirt is red the concentration is probably higher.
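
The "5-10% black sand by weight" figure invites a quick sanity check on how much dirt that first casting would take. A back-of-envelope sketch in Python; the pan mass and the bloomery yield here are my own illustrative assumptions, not numbers from this thread:

```python
# Rough estimate of dirt to pan for a first small iron casting.
# PAN_MASS_KG and IRON_YIELD are illustrative assumptions;
# BLACK_SAND_FRACTION uses the low end of the 5-10% figure above.

PAN_MASS_KG = 1.5           # assumed mass of a small cast-iron pan
BLACK_SAND_FRACTION = 0.05  # black sand per kg of typical dirt (low end)
IRON_YIELD = 0.3            # assumed fraction of black sand recovered as usable iron
                            # (magnetite is ~72% Fe, but a primitive bloomery wastes a lot)

dirt_needed_kg = PAN_MASS_KG / (BLACK_SAND_FRACTION * IRON_YIELD)
print(f"Dirt to process: ~{dirt_needed_kg:.0f} kg")
```

On these assumptions that works out to roughly 100 kg of dirt, i.e. a few days of hand-panning, which squares with the "couple of weeks" estimate above.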

If you're lucky enough to have some suitable rocks around you to knap into blades, you may be able to skip the blade and maybe even the axe, but personally I'd just go straight for the iron, because I actually know how to do that.

The method described in that article is basically right, no surprise, but too complicated. The furnace is the issue, and can be solved by building a variation on a Dakota Fire Hole, which is what we want the bend in a waterway for (also we can then set up our sluice in the waterway itself) - the high bank is an ideal place to dig out a primitive furnace, because the curved embankment will help capture airflow for our intake. We don't have the materials to build a bellows, after all, and our fuel is going to be low-quality (we're going to be relying on fallen branches and other wooden debris for a while, properly cured wood takes time).

You don't want to use the fire hole for cooking, mind - we want our fires to be inefficient, because they're going to be where we are sourcing our charcoal until we can get a proper operation going, and the fire hole is too good at what it does.

beowulf888's avatar

That would make a great idea for a reality TV show. Put down several teams of two or three people in the wilds — in places where iron ore is available. Let them have food and water, so they wouldn't have to worry about hunting and gathering. But give them *no tools.* Whichever team is able to create the first iron hatchet from natural resources wins a big prize. We could call the series "The Iron Age," or something like that. The teams will never have worked with iron before, but they'll each receive a booklet outlining the basic steps to follow.

John johnson's avatar

> evidently this is some sort of bizarre TV series?

It's clear you haven't watched it, or else you wouldn't be trying to draw conclusions from it

The premise of the show is basically: Take a woman with survival skills and a "macho macho" man who -thinks- he has survival skills and drop them somewhere, comedy ensues

beowulf888's avatar

Well, you've confirmed my assumptions that it's a bizarre premise. According to the link I posted, it appears that these two individuals wouldn't have survived much longer if they hadn't been removed from the situation. I'm not sure of the comedic value.

BTW, I don't own a television, and I haven't watched network TV since circa 1985, so I pick up most TV-related knowledge secondhand.

Chance Johnson's avatar

There is another show called Naked And Afraid where they drop true survival experts into wilderness situations and failure is COMMON. (They are not all experts, but some of them are real professionals who come to grief.)

Eremolalos's avatar

Hey, I'm the same as you about TV, and even stopped about the same time. I had an el cheapo black and white one in grad school, and often ended the day by smoking weed and watching Letterman, but I don't recall watching anything else on it, and that's the last era when I was a TV owner. I never decided against being a TV owner, just kind of drifted away from it, and now the sound of TV is like a cheese grater on my nerves -- especially the over-bright plasticky voices in ads and comedies, and the pseudo-neutral drone of the news anchors. Did you drift away too, or decide one day to throw the fucker in the trash?

beowulf888's avatar

I shot my TV.

After classes, I would get stoned and get sucked into game shows and dumb sitcoms. I noticed that TV was a big time waster. Also, I noticed that at parties, when someone turned on the TV, it would kill all conversation. Everyone would get hypnotised by the TV.

Back then, there were bumper stickers that read, "Shoot your television!" I mentioned to some friends that that sounded like a good idea. One of my friends thought it would be fun to turn my big old Sylvania Color TV into a terrarium for his iguana. So, I borrowed my mom's Ruger .357 Magnum, and with the help of my friend, I carted my TV down to the local gravel pit and shot it. What a mess! The cathode ray tube imploded, and its glass shattered into fine pieces. The glass shards were like fine dust. It was useless as a terrarium. And I probably dumped a bunch of toxic materials into the environment (a stupid move on my part!).

But in the hours when I could have been bettering myself by watching Vanna spin the wheel and Alex do his thing, I read thousands and thousands of pages of novels, history, philosophy, and science. What a waste! <snarkasm>

My theory is that humans are naturally selected to be susceptible to TV. We spent hundreds of thousands of years safe around campfires, watching the flames dance while someone sang or told us stories. Being hypnotised by campfires kept us safe and close at night. And TVs fill that behavioral spot in civilized humans.

Eremolalos's avatar

Well done, beowulf888

Perpetually Inquisitive's avatar

That was a really fun story to read, thank you! You sound like somebody who would be fun to drink beers with.

John johnson's avatar

Just trying to say that it's trash tv and we can't really draw any conclusions from it

> BTW, I don't own a television

Good, me neither! Got rid of mine 15+ years ago

Alas, my girlfriend does, so I get exposed to it a bit more directly

Chance Johnson's avatar

Many, many professional wilderness guides washed out of shows like this. There's more than one.

beowulf888's avatar

Unfortunately, I can get sucked into streaming video. Luckily, I have a low tolerance for stupidity in shows. But every so often, I get sucked into a good series, and I end up binge-watching it on my laptop. The latest season of Slow Horses just started up. And the next season of The Diplomat is due for release in a couple of weeks. Sigh. I can't completely escape my addiction.

Brad's avatar

The show Alone is a good example of how—with minimal tools to start—survival without civilization as we know it is extremely difficult.

Alone is obviously people by themselves, but they’re the best trained people the show can find for such circumstances (with 10 tools of their choice) and they barely make it 100 days. I really doubt a larger group would fare differently.

Safe food, water, and shelter are just incredibly difficult and labor intensive to procure. Tools, supply-chains, and modern manufacturing really are basically magic.

Citizen Penrose's avatar

One thing that makes Alone especially hard is that they start them off at the end of autumn or the beginning of winter. I get the sense that quite a few of the more able contestants could go indefinitely if they had the whole spring and summer to stockpile food. They're also limited to staying in one spot and can't move to find better fishing or game spots. The Arctic region is also more hostile than most places on earth would have been before civilisation.

Christina the StoryGirl's avatar

They're also forced to stay in the camp / area they've been assigned by the producers, and of course they have to follow the law with regard to hunting (they can't kill animals out of season, etc).

Deiseach's avatar

Yeah, that's definitely loading the dice. No stockpile of food, during the lean season, starting off from zero - even experienced people who know the environment will struggle, and dump someone there who has never lived in that place so doesn't know the cycles of weather, foraging food, etc.? Setting them up to fail.

Yug Gnirob's avatar

I thought you were talking about "Alive", which... yeah same conclusion.

User's avatar
Comment removed
Oct 1
Brad's avatar

I still think you’re underestimating the difficulty in the details…

Some examples:

How many people would be able to orchestrate running a herd of buffalo off a cliff? Or "run a deer to death" (something I've never heard of, and which, as a hunter myself, seems virtually impossible)?

Who’s going to build an extensive smoker system for a buffalo herd’s worth of meat and get everything smoked before the meat rots in the heat? How will the predators/scavengers be successfully kept at bay from the meat? How much will eating near rock-hard smoked meat for months nourish the population?

These are skills that our ancestor populations gained over centuries/millennia that are virtually gone now and can't just be picked back up like riding a bike.

Chance Johnson's avatar

Humans can indeed outrun certain species of deer, by chasing them until they collapse of exhaustion. This is a venerable method of hunting on the African savannah. It would not work in a forested area.

User's avatar
Comment removed
Oct 1
MichaeL Roe's avatar

I have hiked 40 miles in a day when I was younger. Admittedly, doing it again several days in a row is harder than doing it once, but I would expect typical humans in reasonable physical condition to be able to do it. Humans are really good at persistence hunting.

MichaeL Roe's avatar

(Arguably, hiking Yosemite Upper Fall, or the Grand Canyon down to the river and back, is harder work than 40 miles on the flat, because of the change in elevation)

Shankar Sivarajan's avatar

This demonstrates a very weak proposition. Yes, the Amazon is nasty place to be, and it's hard to build civilization there. Now, if they'd conducted this experiment in something like, say, the Nile Delta, I expect they'd have fared better (and made for a rather different TV series).

beowulf888's avatar

The Nile delta of today or the Nile delta of pre-civilization times? I immediately flashed on malaria, crocodiles, and water-borne parasites. Oh my.

I suspect most of the world is relatively inhospitable to humans alone without the necessary tools and culturally transmitted survival knowledge. Arctic? Nope. Boreal forests? Less deadly than the Arctic, but there's only a short season to build shelter for the winter. Deserts? Nope. African savannahs? Nope (but ironic because this is where humans presumably evolved). The same goes for prairies (which were largely avoided by indigenous tribes before the horse was introduced to North America). Mediterranean climates? Possibly. Temperate woodlands? Possibly, but you've still got winter to deal with.

As for the Amazon basin, it used to be the center of a massive agricultural civilization with roads and cities, probably supporting millions of people until the Spanish and Portuguese arrived. The rainforest seems to be a relatively new thing.

K. Liam Smith's avatar

> A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. I suspect they’d just die. Or maybe it was 10,000. I suspect they wouldn’t do much better unless they already had tools and seed stock.

What do you think the minimum number required would be to rebuild civilization without tools? Or do you think no amount is sufficient, and it's really a question of how many generations they'd have to rebuild?

The only semi-successful example I can find is the mutineers of the HMS Bounty and they started out pretty well provisioned with tools.

Eremolalos's avatar

Hi Liam. Have been wondering how your family is, kwim? If you have a chance and are so inclined, let me know -- but I understand what busy is, so no problem if you don't.

beowulf888's avatar

Survival and civilization are two different questions. And what level of civilization are you aiming for?

Without any modern steel tools, if the initial group of 1,000 or 10,000 had people trained in fiber extraction and twine and rope manufacture, flint-knapping, trapping techniques, hunting (including skinning and field dressing a carcass), leather tanning and leather working, basic midwifery, and probably a bunch of other "primitive" skills I haven't thought of, a group of 10,000 people who kept in contact with each other could probably pull themselves up to a Mesolithic level of resource extraction and survival—but with a significant chance of going extinct at some point. Even if they were to keep their birth rate high enough not to go extinct in a few generations, they'd face a bunch of genetic issues due to inbreeding (I wonder if the Sentinelese people have enough genetic diversity to survive for the long term). If the initial group were only 1,000 people, even with the necessary skill sets, I think there would be a high chance that they'd face extinction within a few generations.

A civilization of even Bronze-Age level sophistication is out of the question for tens of thousands of years unless the founder population had access to durable books that described the technologies accumulated over the past ten thousand years. And the founder population would have to create some strong cultural institutions to keep at least some of their descendants literate. Otherwise, we're talking tens of thousands of years or hundreds of thousands of years before humans reinvent civilization.

Here's a fun timeline of the evolution of human tools. It took a long time for humans to figure out all this stuff. I don't think it would go any quicker without some repository of knowledge.

https://www.historicaltechtree.com/

Deiseach's avatar

Yeah. SF stories of time travellers going back to, say, Ancient Rome and speedrunning civilisation with their superior technical knowledge are great fun, but not realistic.

It doesn't matter how smart you are and how much theoretical knowledge of metallurgy etc. you have. Great, you have 160 IQ and the history of the Industrial Revolution memorised. Now it's you, your bare hands, and a rock in a forest. Good luck re-creating the 19th century level of society in a month!

Matto's avatar

I came to a similar realization after a handful of camping trips. I'm an average camper/hiker, so nowhere near where these two contestants were at in terms of skills and physical stats, but it quickly became apparent to me that the only things standing between me and sure death were thin synthetic textiles and close proximity to a climate-controlled, self-propelled shelter. I.e., my weekend backpacking trip is merely living on borrowed time thanks to a set of extremely controlled parameters related to weather and distance to very solid shelter.

Actually met a hiker in need a month back in the Adirondacks, which served as a real-life warning of how a small mistake can quickly escalate into a risk to life: she got caught in a storm that somehow wet her gear, which deprived her of sleep and warmth, which translated into minor scuffs and eventually falls. She made out A-OK with some help, but I could easily see a person spiral into making risky decisions, spraining an ankle, getting lost or falling into a ravine, and that being the end of the line. Very educational.

B Civil's avatar

Have you ever read “to build a fire” by Jack London?

Matto's avatar

I did just now while waiting on my delayed plane. It definitely brings the point home, and I admit I enjoy London's style. Thanks for sharing

B Civil's avatar

You’re welcome.

Eremolalos's avatar

I had the same experience. Backpacking in hot weather, making sure each day took me past a good water source. First night, arrived at the camping spot to find a tiny stream of duff-filled water coming down the hill at slightly more than a drip. Spent a couple of hours collecting, filtering, and sanitizing about 2 quarts of cloudy water, then went to sleep worrying that what I'd drunk would make me sick. It didn't, but I bailed the next day. 2 quarts was barely enough to rehydrate me from the day, and what if the water at the next place simply was not there?

Matto's avatar

I am glad you came out OK!

User's avatar
Comment removed
Oct 1
Matto's avatar

Funny you mentioned that because on that same hike I did come across a guy that was coming down the mountain like a storm because he was trying to increase his body temp after having nearly succumbed to hypothermia.

User's avatar
Comment removed
Sep 30
Deiseach's avatar

"Nets aren't that hard to make either, given decent fibers (cotton, hemp etc)."

First, plant your cotton...

*IF* you are in a fertile area and a temperate to tropical climate with fast growing season and plenty of game, fish, etc. around to hunt, where there isn't the likelihood of freezing at night or needing more than basic shelter, then sure - sharpen your stick, dig in the soft, humus-rich earth, plant your seeds and fish/hunt plus forage for wild plants while you wait for the crops to grow.

And hope you don't succumb to illness, starvation, or natural disasters in the meantime.

Here is where I invoke the Famine. 19th century Ireland, part of the British Empire, all kinds of crops grown - and yet the poorest died of starvation. Questions such as "why didn't people fish, we're an island surrounded by the sea and with plenty of rivers?" are often asked, and there are reasons for why "no, they couldn't just live on fish alone".

https://shows.acast.com/irishhistory/episodes/why-didnt-irish-people-eat-fish-during-the-great-hunger

beowulf888's avatar

> Tools aren't that hard to make, depending on what you need. a pointed stick will let you plant seeds, for example. Nets aren't that hard to make either, given decent fibers (cotton, hemp etc).

Hypothetically, if I put you down somewhere random where there were plants that have been useful for fiber extraction (with no phone app to identify them and no Youtube videos to show you how to extract the fibers), how long do you think it would take to figure out which plants have suitable fibers and the best way to extract their fibers?

Next, you'll be given the task of spinning those fibers into twine or thread. Twisting the fibers together by hand will probably not work. Back in the Mesolithic, they would bore a hole in an antler and twist the fibers through the hole to make rope or a thick twine. We know this from analyzing the wear marks on antlers with holes in them that archaeologists used to think were ritual batons. They may have used flint awls to drill the holes. Or possibly simple bow drills, but you'd still face the problem of bootstrapping your twine production and shaping the drill wood.

I took a basic flint-knapping course way back when I was an undergrad in Anthropology. I would have flunked out of the Mesolithic, and my Paleolithic ancestors would have hooted at my attempt at an Acheulean hand ax. And even if flint were locally available, how skilled are you at identifying it? Flint can look like any other river cobble until you crack it open with a rock hammer (which you don't have in this hypothetical scenario). But say you solve the twine production problem without starving to death first, do you feel you could weave together a fish net that would be useful? Even looking at one as a model, I don't think I could.

And starting a fire without matches? You need sturdy twine and an axe to create a bow drill and a hearth stick. I couldn't bootstrap all the necessary materials to do this. I need my trusty survival hatchet, a hefty survival knife, some strong twine, and preferably a saw, and infinite patience on a dry day. ;-)

People underrate the skillfulness of "primitive" people.

https://youtu.be/_tpBCflcekU

User's avatar
Comment removed
Oct 1
Comment removed
beowulf888's avatar

Modern cotton is the result of human breeding. The original, which is still available, isn't a particularly useful fiber source, but there are theories (i.e., just so stories) about why it was chosen as a future fiber of choice for the Incas.

I grew up in NE US, and I couldn't identify the local fiber sources that the Indians used without a manual (or in my case ChatGPT). They are: dogbane, milkweed, basswood, nettles, and slippery elm. Milkweed and nettles I could ID, but I wouldn't know how to extract the fibers from them. The other three, I wouldn't know how to ID them.

But the point was, even people with the skills have trouble surviving alone. Brad DeLong's conclusion was: "it was and is not the ability to think up clever solutions to problems on the fly. Instead, it was pooled memory and anthology thinking-power, plus the division of labor that allows us to carve tools that contain the results of that collective thinking-power." With my editorial note being: i.e., cultural practices and memory that have learned to exploit specific environments.

Christopher Wintergreen's avatar

AI 2027 contends that AGI companies will keep their most advanced models internal when they're close to ASI. The reasoning is that frontier models are expensive to run, so why waste GPU time on inference when it could be used for training?

I notice I am confused. Couldn't they use the big frontier model to train a small model that's SOTA for released models that could be even less resource intensive than their currently released model? They call this "distillation" in this post: https://blog.ai-futures.org/p/making-sense-of-openais-models

As in, if "GPT-8" is the potential ASI, then use it to train GPT-7-mini to be nearly as good as it but using less inference compute than real GPT-7, then release that as GPT-8? Or will the time crunch be so serious at that point that you don't even want to take the time to do even that?
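The distillation idea being discussed here can be sketched as training the small model to match the big model's temperature-softened output distribution, rather than hard labels. Below is a minimal, generic illustration of that soft-target objective (a sketch of the standard technique, not any lab's actual training code; all names are made up for illustration):

```python
import math

def softened_softmax(logits, temperature=2.0):
    """Softmax over logits/T; a higher temperature spreads probability mass out,
    exposing more of the teacher's 'dark knowledge' about wrong answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's softened distribution to the teacher's:
    the classic soft-target objective minimized during distillation."""
    p = softened_softmax(teacher_logits, temperature)  # teacher soft targets
    q = softened_softmax(student_logits, temperature)  # student predictions
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# KL is zero when the student already matches the teacher, positive otherwise.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In practice this soft-target term is usually one component of a larger loss alongside ordinary hard labels; the point is just that a "mini" model can be trained from the big model's outputs and then served cheaply, without ever exposing the big model itself.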

Tossrock's avatar

They're already doing this. o4 was never released, but distillations (o4-mini etc) were.

o11o1's avatar

I think that's just a variation of "keep the true SOTA behind closed doors" by adding "and wheel out the mini version for profit reasons".

Victor's avatar

Thomas Lee has written an in-depth critical review of Yudkowsky and Soares's "If Anyone Builds It, Everyone Dies." But I received the link via email and do not know how to link to the Substack from here, so if someone else could provide that, I would appreciate it.

proyas's avatar

Could the Germans have won WWII had they not attacked the USSR?

We now know that Stalin was planning to attack Germany in 1942 or 43, and that the Ribbentrop-Molotov Pact was only meant to give him time to prepare for that. If the Germans had spent that same time building up defenses on their eastern border, maybe it would have kept the Red Army out, or proven so formidable that Stalin would have reconsidered his invasion.

The Soviets fought harder and were more devoted to victory because the Germans were the aggressors against them and committed many atrocities against Soviet civilians in captured areas. Without that, if the Soviets knew they were the ones fighting a war of aggression on foreign (German and Polish) soil, their morale would have been lower, which would have translated into worse battlefield performance and a greater willingness to end the invasion if it proved harder than expected.

Rothwed's avatar

Something that the other commenters haven't touched on: trade between the two parties was, in the long term, very lopsided in favor of the USSR. Germany needed a lot of raw materials, especially grain and oil. The Soviets demanded machine tools and working examples of German industrial machinery in exchange. So the Soviets were building up their industrial capacity and knowledge, which would only make them even more of a military threat as time went on. Meanwhile, the Germans needed Soviet supplies just to keep functioning at the same level.

Citizen Penrose's avatar

I reviewed Wages of Destruction for the book review last year which I think is the best book on those grand-strategic questions of the war.

https://claycubeomnibus.substack.com/p/book-review-wages-of-destruction

WoD argues that the main bottlenecks for the German military-industrial system were arable land to grow food and access to oil, both of which would have been solved if Germany had managed to capture the southwestern part of the USSR. Without capturing those regions, Germany would have been dependent on trade with the USSR to avoid succumbing to either food or fuel shortages, which is a big risk to place on an unreliable trading partner.

In the grand scheme of things, though, in a three-way fight between the USSR, Germany, and the Western powers, having the Germans and Soviets expend themselves fighting each other is the ideal outcome for the Western powers, who win the war with relatively little fighting, and a major game-theoretic loss for Germany and the USSR, who both take extreme losses that mostly cancel each other out, benefiting neither. So there would have been a huge mutual benefit for Germany and the USSR if they had been able to cooperate longer than they actually managed through Molotov-Ribbentrop.

Realistically, though, either one placing trust in the other would have been an extreme risk since they were so ideologically at odds, and, counterintuitively, making a pre-emptive first strike at an opportune moment was probably the more cautious move for Germany to make.

Cry6Aa's avatar

It's funny, I'm a huge fan of the book (have reread it a few times even) and my major takeaway was that the combination of Nazi ideology, and the economic consequences of enacting that ideology, made some kind of catastrophic conflict more or less inevitable for Germany. It also neatly explained the seeming paradox of how you could conquer the most industrialized and richest corner of the world and still have to manufacture all your own stuff at great expense.

As an aside, the most chilling part of the book for me was the explanation of paradoxical tensions of their extermination camps and slave labour economy, leading to an insane situation where company accountants were bidding for slave labour and then trying to weasel just enough food out of the government (which wanted to starve them to death asap) to extract useful work out of them before they could expire. Generally the book was very good at making me understand how you could logically, rationally proceed from an insane premise and end up in the position of writing on behalf of Bayer to ask for your allocation of death camp slaves to be increased this quarter. Which is a pretty good intro to macroeconomics in general, actually.

Cry6Aa's avatar

Define "Germans" and "won". The Nazis conducted an unsustainable arms buildup specifically to enact a war against the East for lebensraum, with the ultimate idea of conducting a sustained war of racial survival against the "seat of world Jewry" - the United States. That's literally the plan here, as laid out by Hitler. And they acted accordingly, to the extent that by the late 1930s their economy was overheating and they were facing a currency-exchange and balance-of-trade crisis. Which they then solved by economically strip-mining western Europe and whatever bits of the East they got ahold of.

So long as you have the Nazis running Germany, you have the buildup of forces and the need to put them to use to 'solve' the economic problems they cause. Their ideology gives them a plan and direction (conquer Europe, empty and colonise the East) and all these forces then lock them into a total war against the three largest powers on the planet (the US, USSR and British empire).

Only in a completely ahistorical scenario, one with no Nazis and no Hitler, do you end up with a 'normal' reactionary authoritarian in charge. And then the most likely outcome is a more limited war to 'retake' various bits of Europe that probably results in a drawn-out campaign against France and/or Poland (no massive, unsustainable switch to a wartime economy in the 30s means a smaller, less well equipped German army). At which point the British inevitably get involved and Germany ultimately loses or settles and keeps some of its gains. That's about the best outcome the Germans could realistically have expected.

Neurology For You's avatar

If the US stays out, maybe. Otherwise the Soviet Union would have attacked at a time of its choosing with US support.

Also: Without any possibility of grabbing the oil fields of Baku, the German war plans don’t make much sense.

Melvin's avatar

"Winning" is not well defined for the Germans in this case. They can't invade and totally subjugate their enemies the way we did to them, but they could perhaps reach a negotiated peace that lets them keep the things they really wanted.

EngineOfCreation's avatar

Winning is fairly easy to define. Winning means that Germany gets control of whatever resources they can profitably extract, especially oil fields and fertile farmland in Western Russia, and that Russia makes no effort to build an army with which to attack Germany. For the latter, you don't need a German soldier on every crossroad, just a handful of officials to oversee the remaining Russian diplomacy, military, and economy.

Urstoff's avatar

More a matter of just how quickly Nazi Germany would have surrendered after German cities started getting nuked.

Shankar Sivarajan's avatar

I don't know if it's true that morale would have been lower in a "war of aggression." That smacks of being a modern myth.

Viliam's avatar

I know little about history, so this is totally an uninformed guess:

It seems to me that the main mistake the Germans made was trying to conquer too much. However, not attacking the USSR was probably not an option, because in that case the USSR would have attacked them later. The correct move would probably have been to avoid making an enemy of the UK.

What I would do, in Hitler's place: Make it a part of my ideology that Britain is also a superior race (perhaps second only after Germans), and emphasize the similarities of my plans with British imperialism (basically that Germany is going to be Britain 2.0). And of course, not attack Britain. They wanted to avoid the war anyway, now they would have no reason to join it. I might even offer some part of France to them (as a compensation for some kind of conflict they had with France in the past). Later, when Japan attacked USA, I would throw Japan under the bus. After taking Poland and France, I would make it clear to countries in Western Europe that they are safe, and fully focus on the eastern front. I would accept Ukraine as an ally against the rest of the Soviet Union. I might offer territories near Leningrad to Finland, if they help me conquer the city.

But of course, someone thinking this carefully would probably not have started WW2.

Peter Defeel's avatar

> What I would do, in Hitler's place: Make it a part of my ideology that Britain is also a superior race (perhaps second only after Germans), and emphasize the similarities of my plans with British imperialism (basically that Germany is going to be Britain 2.0). And of course, not attack Britain.

He did believe that the British were superior and Germanic, admired the British, and talked up the British Empire.

He didn’t want war with Britain, but we declared war after he invaded Poland.

Erica Rall's avatar

>Could the Germans have won WWII had they not attacked the USSR?

Very probably not. By mid-1941, Britain is firmly committed to the war and the US is actively supporting Britain with money and supplies. The US Navy started shooting German U-boats on sight in September, and full American entry into the war was very likely to happen even without Pearl Harbor. Germany had already tried and failed to bomb Britain into submission. Invasion was pretty much impossible due to Britain being surrounded by water, having a much bigger navy, and Germany having no landing craft and very few seaworthy transport ships: the Operation Sealion war plans proposed using Rhine river barges towed by destroyers as invasion transports. Starving Britain into submission by sinking merchant ships was theoretically possible but still a pretty long shot.

Germany also had the major handicap of having very limited access to food and petroleum products: the US was the largest exporter of these, and the British blockade cut off most other sources. Staying at peace with the Soviets would have helped Germany somewhat, since Germany was buying Soviet food and oil exports, but I get the impression that it wouldn't have been enough.

>We now know that Stalin was planning to attack Germany in 1942 or 43

Do we? My understanding is that most mainstream historians have firmly rejected theories about Stalin having short-to-medium-term plans to invade Germany in late 1941, especially after having the opportunity to examine Soviet archives for evidence in the 1990s. I've heard of some "maybe in 1942 or 1943, but definitely not in 1941" remarks, but I had parsed those as more speculation than being based on any hard evidence of Stalin pursuing plans of invasion.

Chance Johnson's avatar

Britain could have defeated the Axis all by herself, although it would have been a longer and bloodier affair. Assuming Germany didn't build a nuclear weapon first, and it seems like Germany's nuclear program was moving at a snail's pace.

Erica Rall's avatar

I think if it were just Britain and its colonies and dominions vs Germany and the other European axis powers, then it's probably a stalemate until one side can't sustain a war economy any longer or someone gets nukes. I don't think Britain has much prospect of successfully invading mainland Europe without American or Soviet assistance and Germany has effectively zero prospect of invading Britain.

Chance Johnson's avatar

Reasonable analysis. I'm glad the United States entered the war, but I think it's fine that we entered the war slowly, by gradually putting more pressure on the Axis until the Axis snapped and attacked first. I absolutely don't think we were obligated to join the war in 1939. Would joining earlier have saved some lives? Sure, and it would have ended other lives prematurely. There's no guarantee it would have been a net benefit for America or humanity. (What if FDR forced us into the war in 1939, and the American people were so aggrieved at this that isolationists seized power and dragged us back out of the war before the job was finished?)

Gordon Tremeshko's avatar

I think you're correct. The Soviet plan to attack Germany was, I believe, little more than a contingency plan wherein the General Staff draws up a strategy to attack just about everybody, for use *in the event* that a sudden conflict arises with one of those countries, just so that nobody has to come up with one on the fly when minutes count, like in 1914. I don't think there was a plan being operationalized by the Soviets, or even close. I could be wrong.

Melvin's avatar

On the other hand, if Germany hadn't attacked the USSR and had started losing anyway, I can still imagine the USSR joining in at the end of the war, because Eastern Europe would, in the face of a German collapse, be free real estate.

EngineOfCreation's avatar

Yep, that would have been very likely. Apart from the free real estate, as in Manchuria and the Kuril Islands, there were plenty of small nations, e.g. in South America, that declared war on Germany shortly before the end of the war. They were militarily useless and had no intention of sending any soldiers to fight, but it still qualified them for US military help to build their own militaries. It also qualified them to be founding members of the UN. So yes, plenty of incentives to pick the winning side.

Victor's avatar

"We now know that Stalin was planning to attack Germany in 1942 or 43" Do you have a reference for that?

If true, that merely confirms what Hitler suspected, and provided the rationale for attacking the USSR when he did. Unfortunately, I don't see a way out for Germany. I don't think that the morale difference between defenders and attackers is as strong an effect as you are implying.

France and Russia were allies, so an attack on one was going to end up a war with the other (because they both know that once their ally is gone, they are next). Hitler believed that he could not attack Russia without being attacked by France on the other front, and France was the weaker party, so he attacked France first.

But France and Great Britain were also allies, and for the same reason. So Hitler believed that he couldn't attack France without ending up in a war with GB, and he was probably correct. Thus, The Battle of Britain.

But you can't go to war with GB without involving the United States. GB was simply too valuable to the US economy for reasons of trade and debt, so the US isn't going to let GB fall.

So the interlocking chain of dominoes surrounding Germany is just too tight. And we all know what happened when they tried to take them all on at once, so...

An expansionist Germany in the 1940's is likely doomed.

Peter Defeel's avatar

> But you can't go to war with GB without involving the United States. GB was simply too valuable to the US economy for reasons of trade and debt, so the US isn't going to let GB fall.

That’s quite the rewriting of history.

Which is common in America - this idea that the US was going to go to war with Germany eventually, with or without Pearl Harbour. There's little evidence of that. In fact, Germany had to declare war on the US.

Chance Johnson's avatar

I would argue that our cultural and linguistic ties are the reason why America would not ever have permitted Britain to be overrun. The reason the United States didn't get involved earlier was possibly because American analysts determined that Britain was not facing an immediate, existential threat. Germany's naval power was far too modest to effect an invasion. Germany's air bombing was tragic and disruptive, but again, far short of what would be required to show an existential, immediate threat.

Peter Defeel's avatar

Britain was nearly lost. By the invasion of the USSR, most newspapers in the US were expecting a German victory in Europe. And yet sentiment stayed isolationist. This Anglo-Saxon alliance was often in Churchill's head.

Chance Johnson's avatar

Nearly lost? Flabbergasting. They possessed half the world. There were millions of colonials ready and willing to fight. The Germans were desperately short of supplies in late 1940, and they were using a lot of captured weapons. They didn't have the manpower to protect the land they had seized. Their Navy was a joke compared to the British Navy. "Nearly lost" seems so against all the evidence as I see it, but of course there are historians who believe this. World War II historiography is all over the place.

It was Churchill’s job to win the war as fast as possible, and that required him to tailor his language to a sense of urgency, and to inspire urgency in others. Including Americans.

I would also not put much stock in what the newspapers were saying in 1940, because for moral, political, and economic reasons, much of the upper class in this country absolutely wanted to get us into the war, including newspaper owners. Isolationism was much more of a grassroots thing. My personal take: it would have been fine for the United States to join the war a year earlier. But it wasn't 100% essential for the survival of Britain.

Peter Defeel's avatar

> Nearly lost? Flabbergasting. They possessed half the world. There were millions of colonials ready and willing to fight

This is massively retrospective thinking, and even if true in retrospect, it wasn't clear at the time. We - for I am British - had lost the war on the continent by 1940, and there was no D-Day possible unless America joined, which it wouldn't have without Pearl Harbour. That Britain felt it was in existential crisis is clear from diaries at the time, and was celebrated in many US newspapers at the time; you are overestimating Anglophilia in the US. Isolationism ran deep.

It was more likely to be surrender or accommodation, rather than fighting on, if Germany had taken the USSR. The colonials wouldn't have mattered then.

John Schilling's avatar

Several months before Germany "had to" declare war on the United States, the United States Navy was ordered to engage and destroy German warships on the high seas, wherever they might be encountered.

The US *went* to war well before Pearl Harbor, we just didn't *declare* it. And we knew it was going to take another six months at least to get the Army in shape, so we kept the undeclared war purely naval at the outset.

Peter Defeel's avatar

This wasn’t a declaration of war. It was merely protecting surrounding waters.

Erica Rall's avatar

It was an undeclared limited war. Shooting another country's warships on sight in international waters is usually considered an act of war, especially if it's accompanied by public statements of "Yes, we meant to shoot at your ships, and we'll do it again!"

There was also Lend-Lease, which started 9 or 10 months before Pearl Harbor. International law at the time allowed for private sale of arms by citizens of neutral nations to countries that were at war, but only if the government policy controlling the arms trade was applied impartially among the belligerents. Allowing private sales to one side but not the other was contrary to provisions of impartiality in Hague Convention (XIII) of 1907, and governments supplying arms and other war materiel "directly or indirectly" to one side or the other was expressly forbidden. The Cash and Carry policy of 1939-40 was crafted to technically comply with the letter of Hague Convention XIII while still favoring Britain and France, since Germany couldn't afford to pay cash on the barrelhead for arms shipments, and even if they could, they'd have a hard time getting payments and deliveries past the British blockade of Germany. Lend-Lease (and the Sept 1940 Destroyers for Bases deal) crossed the line to where the US was no longer behaving as a neutral power under international law, even if we weren't (yet) actually shooting at the German military.

Peter Defeel's avatar

> It was an undeclared limited war. Shooting another country's warships on sight in international waters is usually considered an act of war, especially if it's accompanied by public statements of

It was in response to the Germans attacking American ships and actually sinking one - which on its own didn’t lead to war. It was also purely defensive. Presumably such a policy, announced or unannounced, applies to China now.

Lend lease certainly showed a bias towards the allies, but it’s not declaration of war either.

I know it's absolutely against the post-war American mindset to believe anything else, because it's a post-war justification of American intervention elsewhere - the arsenal of democracy against all the new Hitlers. To believe that the US wouldn't have fought Hitler 1 without the German declaration of war is anathema.

WoolyAI's avatar

I doubt it, although I would defer to anyone with actual military experience. The main two points that jump out:

#1 The thing the Nazi military was great at was blitzkrieg, like, super-fast armored assaults that defeat the enemy fast and hard.

#2 Nazi Germany's big problem is that they're at war with both the US and the USSR. You don't just have to beat the Soviets, you also have to beat the Americans. And yeah, allying with Japan keeps the Americans focused on the Pacific but what's the plan here, that Nazi Germany holds out against the Soviets until 1946-1947 and then the Americans will make peace with Germany, instead of wrapping up Japan and then pivoting to Europe?

Like, long-term, grand strategically, Germany cannot survive a long-term conflict against both the US and USSR. So either you make a stable peace with the USSR (nope, just nope), knock the USSR out of the war with an attack (plausible but failed), make peace with the US (maaaaybe, especially if you sell out the Japanese but I've never heard of anything like this getting traction), or knock the US out of the war (LOL).

I don't know, I have difficulty imagining anybody winning a long, defensive war against Soviet manpower and American manufacturing but this is very armchair theorizing and not an area I've studied too much.

Elle's avatar

Hi! This is off topic to the above conversation, but I was recently searching for and revisiting your reviews of cities you lived in.

Two questions:

1) Do you have these aggregated somewhere in one place, and

2) How is Houston two years later?

WoolyAI's avatar

1) No, no aggregation

2) Houston has soured a bit but it's still a great city. Houston has two big problems which have grown over time. First, I know I said it's hot, but it's hot and it wears on you. Even by California Central Valley standards, it's too fricking hot. Second, more importantly, there's just no nature. No trees, and that's really started to bug me. I mean, there's Sam Houston forest and stuff but... it's not pretty, it's not Cali or Colorado or Oregon or Arkansas, it's just ugly swamp/scrubland. It's not even gorgeous open desert like El Paso or New Mexico. I'm missing nature a lot.

On the other hand, the entertainment and event options are insanely good, to an extent I think I've acclimated to without taking the time to appreciate. Like...I just automatically get season tickets to the Alley Theatre, Dirt Dogs Theatre, and Rec Room Arts and I'm just booked into seeing 14 solid-to-great plays/year with no effort, it's just a thing that comes up on my calendar on a random Tuesday. That feels very natural and normal to me now but outside of...maybe a handful of major cities that's not a thing. And theatre isn't the primary benefit, it's just like a side thing. The whole indoor events thing is great. Comedy clubs, concerts, sports...I got to see Weird Al in concert. That's just a thing that happens. It's not just that there are good options, it's that you can literally overfill your calendar with good options. Want good funky art stuff, like full size interactive installations? There's the Art Museum, Meow Wolf, and sometimes the MFAH. Even in incredibly niche stuff, you're spoiled for options.

EngineOfCreation's avatar

>especially if you sell out the Japanese

Nazi Germany and Imperial Japan were allies in name only. There was practically no cooperation between them at all, so there was nothing to sell there.

On the other hand, had there been anything substantial to sell, since the USA had a "Europe first" strategy, Japan would have been the more likely beneficiary of breaking that alliance.

On the OTHER other hand, the dominant faction of Japanese leadership was convinced of victory and determined for Japan to die otherwise, so they wouldn't have sold anything even if they could have.

Peter Defeel's avatar

> Nazi Germany's big problem is that they're at war with both the US and the USSR. You don't just have to beat the Soviets, you also have to beat the Americans

They were at war with the USSR first, of course.

WoolyAI's avatar

?

Yeah, the US wasn't officially at war when Germany invaded the USSR, but it had started Lend-Lease and cut off oil to Japan, not to mention Germany and Italy had signed the...one sec, the Tripartite Pact, officially making the Axis powers. So yeah, the US wasn't "in the war", but it was pretty obvious which way US policy was, and had been, heading.

Peter Defeel's avatar

None of this is a declaration of war, and without Pearl Harbour the US wouldn't have joined.

By the time Germany was deep into Russia (late 1941) it looked like Germany had secured the continent. If there had been a defense of Europe strategy it would have happened earlier.

The Germans literally sank an American navy vessel - the Reuben James - without response. 100 lives.

John Schilling's avatar

"None of this is a declaration of war and without pearl harbour the US wouldn’t have joined. "

That assertion needs a whole lot more support than you're giving it.

I'm pretty sure that without Pearl Harbor the United States would have waged the same sort of undeclared naval war against Germany that we waged against France in 1798, while building up our Army and (Army) Air Force until they were ready for action. Then found or engineered a Lusitania 2.0 level provocation, or hinted to Churchill that MI6 needed to fake up another Zimmerman telegram stat, which FDR would take to Congress and get a proper declaration of war.

And I can back that up by pointing to the naval skirmishes with Germany prior to Pearl Harbor, and to the policy documents committing the US to working with the UK et al to properly de-Nazify Europe by any means necessary. What have you got?

Peter Defeel's avatar

What I have got is that the US didn't declare war on Germany, even after Pearl Harbour, and wouldn't have - as the diaries of the cabinet show - if Germany had not declared war on the US first. I like to deal with facts rather than speculation.

EngineOfCreation's avatar

Wikipedia says about the Reuben James that "The destroyer was not flying the ensign of the United States and was in the process of dropping depth charges on another U-boat when she was engaged."

It further says about the Neutrality Patrol, of which the Reuben James was a part:

"Roosevelt's initiation of the Neutrality Patrol, which in fact also escorted British ships, as well as orders to U.S. Navy destroyers first to actively report U-boats, then "shoot on sight", meant American neutrality was honored more in the breach than observance."

American strategy was to enter the war but let the Germans make the declaration thereof:

https://ww2db.com/battle_spec.php?battle_id=336

"The Neutrality Patrols continued through 1941 but were rendered moot by Germany's declaration of war on the United States on 11 Dec 1941. As part of Germany's justification for declaring war, they made specific mention of the Greer, Kearny, and Reuben James incidents, describing them as flagrant violations of any supposed neutrality. "

"The Neutrality Patrols were controversial at the time and remain controversial still. Roosevelt's own Secretary of War, Henry Stimson, believed the patrols were belligerent acts and he advocated Roosevelt to openly say so."

User's avatar
Comment removed
Sep 30
EngineOfCreation's avatar

Static defenses are much more than above-ground fortresses. There are plenty of trench networks on both sides in Ukraine, in a very static ground war.

Jamie Fisher's avatar

Just another update in "We're Totally Losing the Debate on AI Risk" (and thus perhaps

Scott Alexander,

Kokotajlo,

Yud,

Soares,

Zvi,

or SOMEONE

should start debating face-to-face (or perhaps "just have coffee with") the people who radically disagree with them)

https://www.understandingai.org/p/the-case-for-ai-doom-isnt-very-convincing

Seriously, every-other-day my phone gives me another story about Yud & Soares being wrong.

Eremolalos's avatar

If those who believe that AI is a great risk are sure that they are right, I think they should stop debating in these little niche spaces and instead throw themselves into changing public opinion. Scott and the rest do not sound to me like practical people, and I think they should hire people who are good at practical things without being unscrupulous. The group can work on scaring people about AI doom risk, but also AI slop.

I feel confident that there would be impressively awful results from a study like this: compare toddlers who spend several hrs/day watching AI slop (bright, loud, attention-getting stuff with very little order to it, and by order here I mean order that would make sense to a child age 18-36 months: simple stories with toddler-level drama, people acting on motivations toddlers can grasp, stories that have continuity and a recognizable beginning, middle and end) vs. toddlers watching kid vids from an earlier era that have the structural qualities I named above. Pretty sure the slop toddlers would not do as well on cognitive testing. That should get people's attention.

Anonymous's avatar

Society is obviously not going to stop AI progress regardless of what they say. The realistic outcome has always been to raise awareness of the topic for now and be ready to advise leaders when society inevitably GOES FULL PANIC MODE if AI finally does something actually dangerous. If it's too late to intervene at that point, well, too bad for humanity.

Brad's avatar

There have been multiple video podcasts with these various personalities debating. I think—this is an honest take—they need to button up their looks. They kind of look funny, and some of them speak funny. People don’t tend to take arguments from funny looking & sounding people seriously.

Jamie Fisher's avatar

> There have been multiple video podcasts with these various personalities debating

Could you point to me a good example?

Brad's avatar

https://youtu.be/9XuVn6nljCM?si=vPg6bEvZk_HRcPuI

https://www.youtube.com/live/6yQEA18C-XI?si=0ek8WV0Zs5HL1h09

I’m sure there are better ones available. Zvi generally covers them on his site when they come up.

1123581321's avatar

"Moore's Law is a human law. We double the computer power every two years. But once computers do it, they will first do it in two years, then in one, then in 6 months, then in 3 months, - it's singularity"

Jesus wept. These men want us to take their ideas seriously.

Wormwood's avatar

Yes, they need a good-looking and charismatic spokesman. Preferably someone that has no connection to AI research or rationalism. They can be fed talking points from people who are more knowledgeable on the subject.

Wanda Tinasky's avatar

>Seriously, every-other-day my phone gives me another story about Yud & Soares being wrong.

That's probably because they are wrong. You should be careful to avoid trapped priors in your thinking and use this as evidence to update away from AI being an existential risk.

1123581321's avatar

Good luck with that one. I spilled thousands of words on the topic in these here comment boxes, but trying to explain the basic realities of making stuff to people who have never done it and have 0 interest in learning is draining and I am done.

You know the saying: "It's hard for a man to understand something when his salary depends on him not understanding it", right? But it is 100X harder for a man to understand something when his identity depends on not understanding it.

Yud is an intellectual idiot who hasn't done a day of real work and thinks the physical world is just like words on a screen. No, seriously, go read this gem: https://ifanyonebuildsit.com/6/wont-ais-be-limited-by-their-ability-to-design-and-run-experiments , and weep. The man who knows nothing about running experiments confidently explains how they can be "sped up" if only "smart people" think hard about it. Like, a 1000-hr HTOL test will somehow run in less than 1000 hours?

It's pathetic.

Odd anon's avatar

> But it is 100X harder for a man to understand something when his identity depends on not understanding it.

Yudkowsky's identity depended on the exact opposite of what you seem to think. He founded the Singularity Institute, which wanted superintelligence as fast as possible, and did a massive about-face in response to realizing the risks. In any case, he's mostly irrelevant (other than as a decent writer) given that his underlying position is shared by the top experts in the field (excluding those on Big Tech's payroll).

1123581321's avatar

Name three of these top experts who are not computer scientists or "AI researchers", but have background in manufacturing or similar physical reality-based fields.

Look, I'm not saying computer scientists are "stupid" or anything of the sort. The point is, they deal with 1s and 0s, the bit-flipping field where progress has been relentless and accelerating at an exponential pace, and it's hard to see why it'd slow down. But they tend to handwave away the fundamental, irreducibly complex reality of the physical world - because they are typically not experts in it and don't have the experience of making things that move, cut metal, grow cultures, etc. Every time I read Yud or AI2027, or whatever links folks throw at me, it's always the same - they gloss over the physical world at best, and are utterly clueless about it at worst (that would be the "Corolla made of atoms" Yudkowsky; I can't emphasize enough how ignorant the guy is about many of the subjects he bloviates about with astonishing confidence).

So when they say, "AI will keep getting better", I don't have a strong argument against that (but no AGI in 2027, come fucking on, those chips are in the fab today, what do you expect to happen between now and then?). It's when they say, "and it will kill everyone" that I ask a simple question: "how"? How will it kill everyone? Can you at least try to model this? We know what it takes to kill a human, come on, get some basic modeling done, anything beyond Disney fairytales (literally!), something that an engineer can work with.

Crickets.

Chance Johnson's avatar

Manufacturers have enormous incentive to want to push technological development wherever and whenever they can. I just don't understand why you would think these people would be so impartial. Why should we put these people on a pedestal above theorists? Without theorists, manufacturers would not have new products to build.

1123581321's avatar

I don't know what "put on a pedestal" means in this context.

I don't know who these "theorists" are and what their theories of manufacturing are.

I was specifically addressing the fact that computer scientists are not manufacturing specialists.

Odd anon's avatar

I was bringing up experts as a counterargument to claims in the categories of "alignment will be easy", "AI will necessarily stop beneath human ability", and "it will be nice by default". Most people, being members of a species that absolutely dominates the entire planet by merit of intelligence alone, don't have an issue with the idea that intelligence -> victory, and don't need examples of "how could something smarter than me possibly outsmart me" to get it. (This isn't a counterargument to your point, just explaining why I commented like that.)

For "how-skepticism", I'd point you to Scott's https://slatestarcodex.com/2015/04/07/no-physical-substrate-no-problem/ or Yudkowsky's https://www.youtube.com/watch?v=q9Figerh89g .

1123581321's avatar

Oh, I see. I certainly don't believe "alignment will be easy" or anything like this. FWIW I think "alignment" as a separate thing from "development" is impossible - we will learn how to deal with the system as we are developing it, not by somehow being able to "jump ahead". I'm agnostic as to whether ASI is possible, but I'm pretty confident it won't be here any time soon.

Scott's piece is interesting but honestly adds nothing to the discussion. "AI will bribe people" is entirely possible, but we already have people who want to kill and wreck. I chuckled at the "plan worthy of Napoleon", as if his Russian campaign were something to emulate. Another example: the "its advice is always excellent – its political strategems always work out, its military planning is impeccable, and its product ideas turn North Korea into an unexpected economic powerhouse" bit is so naive as to make me question Scott's understanding of the world in general. I could go on.

The video is... I'll shut up now.

Chance Johnson's avatar

Napoleon was a devastating battlefield commander who crushed army after army for 14+ years! His enemies were in AWE of his generalship, and he had a fairly good rationale for invading Russia. Hate to see this dismissive attitude of a man who helped build the foundation of liberty in Continental Europe.

Eremolalos's avatar

This guy, Timothy Lee, agrees with you: https://www.understandingai.org/p/the-case-for-ai-doom-isnt-very-convincing

He doesn't directly contradict any of Yudkowsky's points, but points to the complexities of the real world, which, as you point out, Yudkowsky sounds kinda ignorant of. Here's an abridged version of Lee's main point:

Yudkowsky and Soares believe that some systems are too complex for humans to fully understand or control, but superhuman AI won’t have the same limitations. They believe that AI systems will become so smart that they’ll be able to create and modify living organisms as easily as children rearrange Lego blocks. Once an AI system has this kind of predictive power, it could become trivial for it to defeat humanity in a conflict.

But I think the difference between grown and crafted systems is more fundamental. Some of the most important systems—including living organisms—are so complex that no one will ever be able to fully understand or control them. And this means that raw intelligence only gets you so far. At some point you need to perform real-world experiments to see if your predictions hold up. And that is a slow and error-prone process.

And not just in the domain of biology. Military conflicts, democratic elections, and cultural evolution are other domains that are beyond the predictive power—and hence the control—of even the smartest humans.

**********

When it comes to the points you made quite a while ago about the absurdity of the prediction we'd have functioning smart robots in 2027, you and practical knowledge absolutely win. You were right, and the writers of that AI 2027 thing look silly. If they didn't know much about factories and robotics they should have consulted someone who did. Lee, above, is making the same kind of point as you about the real world: it has characteristics that will slow AI down enormously and keep it from being an unstoppable force. But you, in your comments about robots in factories, had an actual factoid to rebut the claim with: if robots were going to be functioning in many places in 2027, there would be preliminary versions now in the places that were going to produce them, and the prelims aren't there. But you and Lee don't have an equivalent factoid to point to in the general case. Yeah, I get that living organisms and military conflicts and political changes are -- what are they called, chaotic systems? -- and so inherently terribly difficult to predict. But I don't see how anyone can say an extremely intelligent future AI could not predict them. Where's the proof? So it's hard to know whether the point Lee is making is a great common-sense insight, or just comes down to a statement that in the world as he knows it certain things cannot be predicted or controlled, and he cannot imagine that changing. So maybe his arguments merely demonstrate his failure of imagination.

So, Fibonacci, I'm not trying to be a clever debater here. This is what I really think. Wut u think of it?

1123581321's avatar

I swear I didn't read this: https://techblog.comsoc.org/2024/11/25/superclusters-of-nvidia-gpu-ai-chips-combined-with-end-to-end-network-platforms-to-create-next-generation-data-centers/

before picking the HTOL test as an example!

"a cluster of more than 16,000 of Nvidia’s GPUs suffered from unexpected failures of chips and other components routinely"
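
The "routine" failures are just what the arithmetic predicts at that scale. A back-of-envelope sketch, assuming a per-GPU mean time between failures of 50,000 hours (my illustrative number, not from the article):

```python
# Back-of-envelope: why a 16,000-GPU cluster sees failures "routinely".
# ASSUMPTION: per-GPU MTBF of 50,000 hours is my own illustrative figure,
# not a number from the linked article.
mtbf_hours = 50_000
gpus = 16_000

# Treating failures as roughly independent, the whole cluster's mean
# time between failures shrinks in proportion to the component count.
cluster_mtbf = mtbf_hours / gpus
print(f"expected time between failures: {cluster_mtbf:.2f} hours")
```

Even with generously reliable parts, something in the cluster breaks every few hours, which is exactly the sort of physical-world friction that pure compute scaling can't wish away.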

Eremolalos's avatar

OK, Fibonacci, I get it. Your points are valid. But I am still worried about AI doing us in. So first I’m gonna make sure you see that I get it, then tell you why I’m still worried.

*I get it.*

There are some things about how the universe works, such as the “butterfly principle,” that limit what an AI, no matter how smart, could do. Additionally, there are lots of things about how the practical world works that provide a lot of stability to the world as it is — or you could think of them as a sort of inertia that would interfere with a supersmart AI swooping in and shoving things in a bad direction of its choosing. You’ve pointed to various examples of this, and I ran across one of them myself recently: AI diagnosis of, say, pneumonia via reading of lung images is more accurate than radiologists’ in tests. But in real life AI diagnoses from images are 20% or so worse than radiologists’, because of various real-world lumps and bumps. The AI was trained and tested on high-quality images from one hospital, on patients who had no complicating conditions that might affect the X-rays. In real life, images vary across different hospitals, some are of mediocre quality, some are of patients with complicating conditions, etc. Also, you need a different AI for each of many body areas, and radiologists would have to pay for and have available dozens of different AIs to get through a day's work. AI doesn’t work on ultrasounds. And then there are complications having to do with whether insurance will pay for an AI-read image.

*Why I’m still worried*

1) I think Yudkowsky irritates you so much that you tune him out. I agree that he is dumb about the real world, but I don’t feel the personal irritation at him that you do. I am used to weird smart people who are dumb about the practical world. And what I see is that lots of them are extremely smart about patterns, including patterns of ideas, patterns of observed regularities that suggest odd possibilities. It would not surprise me at all to learn that Einstein was as naive about the real, practical world as Yudkowsky. But he saw new patterns in physics behind the familiar ones. Some autistic people can recognize a 6-digit prime on sight. It’s a different kind of smarts from yours.

2) I know some of the examples in Yudkowsky’s book, such as the stuff about AI being able to just take over production of various things, can be refuted by pointing out the kinds of “inertia” real-world things like factories and weather have. But those ideas are not Yudkowsky’s only reasons for believing AI is a grave danger. And I don’t think superintelligent AI would make the mistakes Yudkowsky did. If ASI was planning some kind of takeover that gives it more power than mankind, I think it would recognize the limits of its control of chaotic systems, and the ways practical constraints would slow down various moves. Don’t you? I mean, you and I easily recognize those things. In fact I bet that if I asked GPT-5 right now what would limit AI’s control of the weather, or AI’s taking over manufacturing, it would name a lot of the same things you did. I don’t know what an ASI would do if it somehow had the goal of ruling the world via controlling resources and having unbreachable defenses against all forms of human attempts to override it. I’m just smart, not superintelligent. But it does seem possible it would think of things you, I and Yudkowsky have not thought of, things that would work in the world as it is.

3) Scott and Zvi take Yudkowsky seriously. I don’t know whether either of them knows how to change the oil in their car, but both have succeeded at a number of real-world things: training, employment, marriage, kids. You don’t succeed at that stuff unless you can take into account the real world, not some inner version of the world you’ve dreamed up and are impressed to death by.

4) All my intuitions and common sense tell me AI would be dangerous as hell if it had will, goals, etc. Right now, AI is a machine that just sits there inert until we ask it to do something. It can’t learn from experience, can’t learn from direct instruction, can’t “think over” what it knows and find isomorphisms, reconfigure things, have ideas. Seems to me nobody has any good ideas about how to make it able to. Personally, I think further development of AI is going to involve some sort of AI/human hybrid — either AI somehow trained on a developing human being, or AI that uses the brains of human beings bred and used only for this purpose, or human beings that are able to lead human lives, but somehow have continued and instant access to a very smart AI. And I *know* there is no way to guarantee a human being is aligned with the rest of humanity.

5) “The universe is not only stranger than we know, but stranger than we can know.” What if ASI understands some strange things, things we cannot ever know? With this in the back of my mind, I ask myself about ASI and the limits on predicting chaotic systems: sensitive dependence on initial conditions, + if you measure them extensively you change them. Could an ASI that understands strange truths predict and control a chaotic system? And then I think “yes, by becoming the system.” I know that sentence doesn’t exactly mean anything. It’s not the moon, it’s my finger pointing at the moon. My way of contemplating the idea that there are degrees of intelligence that might permit insights that make impossible-seeming things possible.

1123581321's avatar

Ere, thank you for a thoughtful response. Let's see if I can address your points within a reasonable word count.

1) I don't tune Y. out. I listen to him, and every time he opens his mouth/types words he proves to be an utterly ignorant buffoon. Look, when I was 20, I was at least as "smart" as I am now, but I started working as an entry-level engineer. Why? Because I knew nothing about engineering even though I was very smart. Well - Y. never EVER even started as an entry-level anything, he never learned anything about the subjects he bloviates about. Let me give you a perfect example, and if that doesn't illustrate how stupid the man is, I don't know what else to do:

He talks about ASI "taking our atoms" because it needs them (see https://www.youtube.com/live/6yQEA18C-XI ). First of all, atoms? Seriously? Atoms?!! Does he think that a mix of H2 and O2 is the same thing as H2O? That the ASI is so fucking dumb as to take everything in sight, without regard to what it is, and somehow take it apart down to the atomic level? Do you not see how insanely stupid this is? But let's keep going, the man just started showing his ignorance. When asked a perfectly good question - "why our atoms? we are not made of anything special, these atoms are abundant all around us" - he ends up giving an example: ASI can burn you to release the energy.

Burn! Ere, a human is 70% water, we don't burn, it's impossible to burn a human without massive amounts of extra fuel, go ahead, buy a pork shank and try igniting it. How.... dumb does a man have to be to use this as an example of how ASI(!) will use humans.

But forget that: he thinks that an ASI won't be able to recognize the value of a complex low-entropy entity that a biological being is, and will just wreck it, wasting incredible amounts of energy (how do you think you get H and O out of H2O?) to destroy a self-replicating supercomputer and a super-robot in one package? This incredibly smart thing won't understand how valuable humans are to it? It won't be able to use them to achieve its goals? And won't instantly understand (remember, it's an ASI) how best to use humans - that they fulfill their potential best when they are happy?

The man is a truly special case of knowing much and understanding nothing.

2) Let me just say first that ASI doesn't exist, and it's not at all clear such a thing is even possible (we don't even know how to define it); it's basically another way of saying "a fantastic being that can do anything", i.e., God. Or Satan, we don't want to discriminate. But in any case, no, it can't control chaotic systems no matter how smart it is, and it can't shortcut computational irreducibility, any more than it can break the second law of thermodynamics (which is actually an expression of computational irreducibility, according to Wolfram).

3) Scott and Zvi are very smart, but, as we established in 1), that doesn't make them experts in anything. Scott, for one, has holes in his epistemology that a Mack truck can drive through, and yet he refuses to recognize them. See, for example, his debate with Skolnick about schizophrenia genetics, which was so bloody disappointing it forever reduced my opinion of him. But then, he's a trained psychiatrist; why would I overweigh his opinion on tech vs., for example, the guy who started three robotics companies: https://crazystupidtech.com/2025/09/29/irobot-founder-dont-believe-the-ai-robotics-hype/

4) Actually, I don't disagree with this one. AI is dangerous because it's a powerful tool, and like any powerful tool it can be used for good and evil. My objection is to the specific fantasy of AI getting up from its bed and murdering everyone.

5) No amount of "understanding" can predict the behavior of the real world. This is again the computational irreducibility of the universe: if it's impossible to predict what the Nth line of Rule 30 is going to look like without going through the full computation up to the Nth line, what hope is there of predicting how, for example, a viscous fluid will behave? By the way, the simple-looking Navier-Stokes equations that describe it are generally unsolvable, ASI or not.

And again, "ASI becoming the system" is... look, please don't take it as an insult, it is a "profundity", a deep-sounding meaningless statement that reflects a religion-like view of ASI, basically equating it with God. Worrying about ASI destroying our world is at this point pretty much the same as worrying about God destroying our world, and should have about the same level of actionable response, i.e., do nothing different.

Eremolalos's avatar

I’m not trying to have the last word, really just signing off with a few last comments — though feel free to come back at me if you want.

About various idiot things Yudkowsky has said, eg AI will disassemble us for our atoms, or use us as torches. I agree they are dumb. I could have seen what was wrong with those statements when I was 16, based on high school science. That does not disqualify him in my mind as a judge of what to expect to happen as AI development proceeds, because it seems to me the situation is so profoundly novel that scientific and practical knowledge may not be that helpful. They certainly are helpful in thinking about how AI of the present and near-future era will affect manufacturing, health, the economy, etc. I’m talking about the bigger and more general question, which really has 2 parts: What will AI 30 years hence be capable of? And how likely is it that it would do our species great harm either intentionally or unintentionally?

What 30-years-hence AI will be like seems like a very difficult and challenging question, one that calls for deep and original thought about how the human mind works, how computers and machines work, what kind of modifications are possible to the type of AI we now have, etc. It might call for a new paradigm. I’m thinking here about the 2 smartest things I’ve ever understood — Wittgenstein on language and mind, and relativity. I doubt that either Einstein or Wittgenstein was a bit practical, or gave a shit about practical knowledge. They had minds that called up huge and novel patterns and played around with them. Some of what they would have had to say during that process would have sounded naive and dumb to most people: “What if space and time are sort of like the warp and woof of one piece of fabric?” So when I worry that Yudkowsky is right, I am not thinking at all that he is better than you at smart practical science-based thinking — I’m wondering whether he is doing genius pattern matching. He is obviously autistic. Maybe he’s doing the equivalent of recognizing 6-digit primes.

I agree that “super-intelligent AI” is used as though it means God, and actually it doesn’t mean anything. But I think the idea that AI 30 years from now could do astounding things, things that depend on paradigm changes, is not absurd (though also not guaranteed). That’s why I refer to it above as future AI, not ASI.

And there was one time when you skated right over something I’d said:

I said: “I ask myself about ASI and the limits on predicting chaotic systems: sensitive dependence on initial conditions, + if you measure them extensively you change them. Could an ASI that understands strange truths predict and control a chaotic system? And then I think “yes, by becoming the system.” I know that sentence doesn’t exactly mean anything. It’s not the moon, it’s my finger pointing at the moon. My way of contemplating the idea that there are degrees of intelligence that might permit insights that make impossible-seeming things possible.”

Your response:”And again, "ASI becoming the system" is... look, please don't take it as an insult, it is a "profundity", a deep-sounding meaningless statement that reflect a religion-like view of ASI, basically equating it with God.”

I *know* that. I *said* that, I literally said “I know this sentence doesn’t exactly mean anything.” What I was trying to point at was the idea of paradigm shifts in science and physics, and the idea that future AI might be capable of shifts that make possible things that now appear utterly impossible. If I were to take it a little further, I would say that maybe chaotic systems are like space or time, and if you change the paradigm so that space and time are warp and woof of a single fabric, new possibilities open up. NO, of course I am not sure of that. NO, I do not have a new paradigm in mind. NO, I am not convinced that future AI would be capable of coming up with a profound paradigm shift. I’m just reminding you that such things happen, and that it makes sense to consider that a future artificial intelligence might be capable of such a thing.

PS I appreciated your nail gun comment to that Wormwood creep.

Adrian's avatar

> He talks about ASI "taking our atoms" because it needs it (see https://www.youtube.com/live/6yQEA18C-XI ).

That video is full of pure gems.

George: I have AI, I'll gang up with other humans who have AI.

Eliezer: Woah woah woah, you have AI? Or does the AI have you?

That's some deep shit, man…

1123581321's avatar

Two points:

1. Lee still(!) understates the unpredictability of the world. He talks about complex systems, but the thing is, even simple systems (example: https://en.wikipedia.org/wiki/Rule_30) can display unpredictable behavior, meaning that the only way to know what the system's state will be in the future is to go through its operation step by step; there are no shortcuts. This is what Wolfram calls "computational irreducibility", the impossibility of "jumping ahead". For example, think about why a 3 nm process node was not available 20 years ago. Why did we go through 350, 180, 130, ... 22, 14, 10 nm first? We knew all the advantages of going to 3 nm, but we couldn't get there without going through the motions of shrinking the transistor one step at a time. Leaping ahead was impossible even though we knew exactly what would be needed to do it.
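
The Rule 30 point is easy to see in code. A minimal sketch (my own illustration, with my own function names, not Wolfram's code): the only known way to learn the Nth cell of the center column is to compute all N rows, one after another.

```python
# Rule 30 cellular automaton: new cell = left XOR (center OR right).
def rule30_step(row):
    """Apply one Rule 30 update to a row of 0/1 cells, zero-padded at the edges."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30_center_column(steps):
    """No shortcut is known: to get the center cell at step N, run all N steps."""
    row = [0] * steps + [1] + [0] * steps  # single live cell in the middle
    center = len(row) // 2
    bits = []
    for _ in range(steps):
        bits.append(row[center])
        row = rule30_step(row)
    return bits

print(''.join(map(str, rule30_center_column(32))))
```

The center column passes standard randomness tests, yet the rule itself is trivial; that mismatch between simple dynamics and unpredictable output is the irreducibility being described above.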

2. Specifically for "chaotic" systems such as weather: the reason no amount of "intelligence" or "compute" can make predictions beyond a certain point is that the state of the system at a far-away point in time depends on the initial conditions to such a degree that even the tiniest fluctuation in them results in a massive change at that point in time (the "butterfly effect"). This means it doesn't matter how advanced the algorithms are or how smart the AI is; its predictive ability is gated not by intelligence but by the accuracy of the starting data.

And those data come from real-world sensors, which are expensive and slow to make and need to be installed. Then things get worse once active agents impact the system: all predictability goes out the window.

These fundamental limitations appear to be utterly ignored by Yud et al., as if more lines of better code could spin weather sensors out of the ether and place them on every square foot of the planet.
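
The data-accuracy point can be demonstrated with a toy model. A sketch using the logistic map at r = 4, a textbook chaotic system (my illustration; the 1e-12 perturbation stands in for an imperceptible sensor error):

```python
# Sensitive dependence on initial conditions, sketched with the logistic
# map x -> r*x*(1-x) at r = 4. The 1e-12 offset plays the role of a tiny
# measurement error in the "sensor" reading of the initial state.
def logistic_orbit(x0, steps, r=4.0):
    """Return the orbit [x0, x1, ..., x_steps] of the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 60)            # the "true" system
b = logistic_orbit(0.3 + 1e-12, 60)    # a perfect model fed slightly-off data
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"early gap: {gap[5]:.2e}, late gap: {max(gap):.2e}")
```

The two trajectories agree to many decimal places at first, then diverge completely within a few dozen steps; no smarter algorithm fixes this, only better initial data does, and only for a while.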

Jamie Fisher's avatar

> when his identity depends on not understanding it

"AI Doomer" is not Yud's identity, nor any of these people's identities. They have careers and lives beyond this. And many of them would happily just go back to "boring old AI development" if given the chance.

> Yud is an intellectual idiot who haven't done a day of real work

He worked at Bell Labs. Or do you mean Real Work as in "fixing a toilet"?

> The man who knows nothing about running experiments confidently explains how they can be "sped up" if only "smart people" think hard about it.

These types of arguments remind me of the "God of the Gaps" arguments in the debates between Christians and Atheists. "Science can't explain this, science can't explain that." But then it does. So the Defenders Of The Faith retreat to another unexplained "gap" in our understanding of the cosmos.

Well, replace "explain this" with "speed this up" and we basically have the same argument. "A, B, C, D, X, Y, Z can never be sped up!" (until they are)

Protein folding was an impossibly complex problem, until AI researchers solved it and then found more protein structures in a few years than in the past half-century. By a factor of a thousand.

I really wish people like you would come over to our side of the fence and notice the Gigantic Grizzly Bear rapidly bearing down on us.

1123581321's avatar

"He worked at Bell Labs."

No he didn't, lol lol lol. His FATHER worked at Bell Labs.

Yudkowsky hasn't worked a day in his life. Never mind that, he hasn't even formally studied anything, you know, in a setting where one has deadlines and standards and a professor can tell you that your work sucked and you have 48 hrs. to resubmit.

That guy. That guy wants to bomb datacenters that he wouldn't know how to turn on if his life depended on it.

birdboy2000's avatar

>Yudkowsky hasn't worked a day in his life

The man is a published author. Writing is work, and judging by the audience he's amassed (and my own opinion, although you may disagree - still, no author is everyone's favorite), it's work he's quite good at.

1123581321's avatar

Yes, he has a gift of gab. He’s good with words. But that doesn’t make him an expert in anything else. For example, “Corolla made of atoms” is a grammatically correct snippet that is also meaningless drivel as far as modeling reality is concerned.

He’s a Greta Thunberg of AI, understanding little and pontificating much, turning the field into a clown show. She had her “how dare you”, he has his “everyone will die”, with about the same convincing power, i.e., asymptotically zero.

Deiseach's avatar

"He’s a Greta Thunberg of AI"

That is *savage* 😁

Jamie Fisher's avatar

> No he didn't, lol lol lol. His FATHER worked at Bell Labs.

fair

1123581321's avatar

Notice how all these impossibly complex problems are related to "knowledge": we didn't know how to solve protein folding, now we do. We didn't know the structure of human DNA, now we do. The LLM didn't know how to write MATLAB code, now it does.

Now what?

What are we doing with this knowledge? Where are the amazing genome-matched therapies we were all promised?

Well, they are probably still coming, but using the knowledge to create real things takes time and effort that cannot be compressed beyond very strong limits no matter how smart the people/machines become. Some experiments can be parallelized - at massive cost, mind you - but they cannot be shortened; see my HTOL 1000-hr test as an example. And the whole point is that the outcomes of the experiments cannot be predicted, which is why we run them.
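The HTOL point in numbers (a sketch only; the lot counts and chamber counts here are made up for illustration, not from any real qualification plan). Parallel test chambers raise throughput, but the 1000-hour stress duration is fixed by the test spec, so wall-clock latency has a hard floor:

```python
import math

TEST_HOURS = 1000  # HTOL stress duration, fixed by the qualification spec

def wall_clock_hours(lots, chambers):
    """Hours until all lots finish, running `chambers` lots at a time.

    Parallelism shrinks the number of sequential batches, but each
    batch still takes the full TEST_HOURS -- you can't ship early.
    """
    batches = math.ceil(lots / chambers)
    return batches * TEST_HOURS

print(wall_clock_hours(8, 1))  # serial: 8000 hours
print(wall_clock_hours(8, 8))  # fully parallel: still 1000 hours
```

However many chambers you buy, the answer never drops below 1000 hours; more compute or smarter scheduling only attacks the batching overhead, not the physics of the test.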

FFS, we can't even predict the performance of a basic IC without models derived from experimental data!

We advance real world progress one experiment at a time. The physical reality is fundamentally un-modelable from any sort of "first principles", it's irreducibly complex and we can only create occasional short bursts of computational shortcuts, not infinitely self-improving loops.

Jamie Fisher's avatar

human genome project was a great example

1123581321's avatar

A great example of what?

MicaiahC's avatar

Your comment implied that there were no good examples in the linked document. The poster is making the bare assertion that no, the human genome project is a good example.

MicaiahC's avatar

Reading your post, you say "knows nothing about running experiments", so I presume that's an example of a well-run experiment that was under budget and data-efficient.

thewowzer's avatar

In other words, we're not doomed any time soon? I don't follow this whole thing very closely and only vaguely recognize most of those names.

1123581321's avatar

No, lol. At least not from "AI" killing everybody. This bit of bad sci-fi can be safely stuffed into a trash bin.

thewowzer's avatar

Oh okay, I didn't realize that's an idea that AI-educated people took seriously. I assumed that what people are worried about is AI+the internet confusing reality for most people to the point where it's a huge problem, or something along those lines. I feel like that idea is pretty realistic, but then again, that's just a feeling based on my minimal use and knowledge of AI.

Deiseach's avatar

"AI+the internet confusing reality for most people"

Apparently, if I believe media articles, it's happening already. People are using chatbots as their romantic partners, therapists, and marriage guidance, and since AI is that oleaginous agreeable reinforcement dialogue partner (you are so smart! you are right! anyone who disagrees is in the wrong!) this is having devastating effects.

Granted, if you're asking AI for advice, your relationship is already rocky, but the AI is being used as a bludgeon with "see, I'm right, the AI agrees with me and we all know AI is always right".

https://futurism.com/chatgpt-marriages-divorces

"Even Geoffrey Hinton, a Nobel Prize-winning computer scientist known as a “Godfather of AI” — a technology that likely wouldn’t exist in its current form without his contributions — recently conceded that his girlfriend had broken up with him using ChatGPT.

“She got ChatGPT to tell me what a rat I was… she got the chatbot to explain how awful my behavior was and gave it to me,” Hinton told The Financial Times. “I didn’t think I had been a rat, so it didn’t make me feel too bad.”

This is just reinforcing my pessimism about how AI will destroy us - not because it becomes super-intelligent and self-aware, but because we humans are stupid and we will happily hand over decision-making ability to the machine because it relieves us of the work of having to think for ourselves.

Paul Brinkley's avatar

I'll be somewhat more pedantic than usual and say your pessimism here might be misplaced. For one thing, Hinton apparently never believed what ChatGPT said. For another, the narrative in progress here is that Hinton's girlfriend made a mistake in trusting ChatGPT, and that mistake was so big that it might justify their breaking up anyway.

If that's the case, then AI was a benefit here. Who knows how many terrible-but-hidden matchups it could spot and prevent? Imagine all the broken homes and traumatized children that never come to be!

Thirdly, Hinton is 77; assuming his ex-GF was about as old, then frankly, I'm not sure how much was going to result from that relationship either way. Admittedly, that takes some wind out of the sails of my second point, but if you're concerned about younger people making the same mistake, then the wind goes right back in.

Taleuntum's avatar

Tons of experts have voiced concerns about AI killing everybody. The usual names are Geoffrey Hinton, Yoshua Bengio, Demis Hassabis.

If you must decide using the advocates' prestige instead of evaluating the arguments yourself, I'd still go with them instead of anonymous substack commenter who "knows the realities of making things".

1123581321's avatar

G. H. - computer scientist

Y. B. - computer scientist

D. H. - AI researcher

Wake me up when, for example, a semiconductor reliability engineer publishes a doom scenario showing how AI can run a fully automated fab recursively improving yields and throughput.

Jamie Fisher's avatar

"Prestige" is not working for us though, because it's dwarfed by the Institutional Prestige of magazines and media outlets that are against us (even if the particular articles are written by Joe from Sales).

The "Prestigious Names", imo, need to Get Out There And Verbally Debate.

This war of strongly-worded blogposts is unwinnable.

Victor's avatar

Can anyone recommend a good news site that outlines which news sources are reporting an event, separated by a "right - center - left" categorization? I already know about Ground news, any others?

Ludi Magistar's avatar

I only know about Ground News, and only from their marketing. Have you tried it and were disappointed? If so, why? I am conceptually interested in the approach but have never tried any myself.

John Schilling's avatar

Seconded; this is part of my daily morning feed.

Victor's avatar

I have tried it, and while I'm not disappointed per se (it does what they claim it will do), it doesn't do everything I might want. Ground News seems configured to present its users with a specific event or news story, and then break down who is reporting it. It doesn't really have a time-efficient way of laying out what the differences are between right, center, and left reporting on these stories, so I am interested in what other sites might be offering, and whether there is anything even better.

Jim Parinella's avatar

readtangle.com covers one topic a day with right/left/site's take.

Anonymous Dude's avatar

Going to try to quit Substack to finally read that book of feminist theory. When I'm back I'll be trans, a redpill misogynist, or dead by my own hand. Wish me luck!

Victor's avatar

You know, you just might learn something about yourself.

Performative Bafflement's avatar

You'll be missed, AD. Just remember, katabasis is always part of the hero's journey, and ideally makes you come back stronger and wiser!

Tangled plumbline's avatar

From experience across several decades of tech jobs, and if you are anything like me, at the end of the first day (or first few days), you will absolutely hate the place -- the work is incomprehensible and there's no useful guidance anywhere, the people are standoffish at best, there has been *no* consideration given for supporting you in your work (like actually arranging for you to have a computer, chair, desk, phone, or whatever -- let alone training or any anything else "touchy-feely" like that).

That's slightly exaggerated, but only slightly -- each of those things (individually) has actually happened to me personally. (Yep, even starting a programming job: me: "what computer can I use?", them: "er...")

Thing is: many of those jobs I liked or loved after a while, while a few turned out to be bad mistakes for me.

What I'm trying to say is that feeling early on that you should never have taken the job, or that you'll never be suited to it, is (at least in my experience) normal, and to be expected. It provides no information (doesn't shift the Bayesian credences non-trivially) about your future in the job. Above all, *GO BACK NEXT DAY*. The marginal cost of sticking it out for a few more hours/days is very low, and the expected payoff is very high, but (if you're like me) it won't feel like that.

(Of course, none of that applies in cases of *actual abuse* -- but that's never happened to, or near, me so I can't comment further.)

Tangled plumbline's avatar

For some reason the above has turned up as a top-level post despite me entering it in the "reply" box to "Ebrima Lelisa". Sorry about that. I shall not attempt to use Substack again...

Performative Bafflement's avatar

This is a common bug that happens when commenting using the app, it's probably not your fault. Commenting on an actual laptop almost never messes up in this way.

Tangled plumbline's avatar

Thanks for taking the time.

FW[little]IW, I was using the web interface via Firefox (fully up-to-date - 143.0.1) on Linux.

striker's avatar

Happened to me in exactly this way this week.

Gian's avatar

1) Is it conceivable that consciousness is an evolutionarily contingent outcome?

That is, given a sufficiently complex system, organic or inorganic, capable of displaying intelligent behavior (learning, playing chess, composing poems, the whole hog) but still entirely unconscious: is this in fact the norm and what might be expected, and is the fact that we are conscious actually an evolutionary accident, unlikely to ever repeat?

Particularly, if consciousness itself is a language construct, as Jaynes suggests.

2) Is it possible that simulation can not capture everything that is in the thing or system we are trying to simulate?

For physics only captures metrical properties of things, and this leaves a possibility of non-metrical aspects of things, first of which is the existence of the thing itself.

Now, simulation, even in principle, can only simulate what has been put into numbers. That is, it only covers metrical aspects, with non-metrical aspects outside the realm of simulation.

Now, consciousness is possibly related to non-metrical aspects of conscious matter. If so, simulating matter, howsoever faithfully, even atom by atom, will not capture consciousness.

For consciousness is inherent in the conscious matter such that no simulation can yield consciousness.

Crinch's avatar

Some behaviours just don't make sense without consciousness, so there is absolutely no chance humans are the only conscious beings.

However, consciousness might still be an evolutionary accident. Some people hypothesise that it's almost like a parasite that is actually worse for survival long-term, because the environment cannot properly adapt to conscious decisions.

Victor's avatar

1) Conceivable, but extremely unlikely, given what we know about how the brain works. Conscious deliberation appears to serve a critical role in learning from phenomenal experience in the moment. It's the function in the mind that assigns an emotional tag to physiological reactions, and that in turn organizes the long term memory. It is the gatekeeper to the self-concept, also stored in memory, which guides future behavior. Seemingly, a non-conscious entity would not be able to carry out those functions.

2) If the simulation is good enough, it should simulate everything, given that everything in the universe is governed by material causes. I suppose you could argue that a simulation that produces the actual thing isn't a simulation anymore, rather the thing itself, but that seems like semantics.

But the point is that cognitive processes seem not to lie in the physical structure of the neurons themselves, but in the patterns of information exchange between them. Given that, anything which reproduces those patterns should reproduce their outcomes.

Gian's avatar

Even if we know how the brain works, this is a far cry from knowing the how and why of consciousness--the hard problem, as you must know it is called.

Your point about simulation is what I am challenging. Please note how one models: we leave something out, and even the most exact model necessarily leaves something out. A simulated hurricane is not a hurricane. A simulation of neurons is not and cannot be a neuron itself. It must, by the very definition of a model, leave something out.

All physics models do this. Maxwell's equations leave out the actual experience of electricity and the actual physical objects, which are replaced by numbers.

Victor's avatar

You wrote: "That is, given a sufficiently complex system, organic or inorganic, capable of displaying intelligent behavior, learning, playing chess, composing poems, the whole hog, but still entirely unconscious, this is in fact the norm and what might be expected, and that we are conscious, is actually an evolutionary accident, unlikely to ever repeat?"

I don't have to solve the hard problem of consciousness (that is, I do not have to demonstrate how it develops) in order to outline what functions it serves. Since it appears to serve critical functions within the mind, I propose that no organic system can evolve human-like intelligence without an actual consciousness. AI does not disprove this assertion, because a human-like intelligence designed it. I'm making a point about evolution in nature. For that to happen, those functions would have to be served by some other cognitive process. Which one?

As for simulation, you were asking if it might be impossible to simulate a system under study. For practical reasons, yes, it may be impossible to reproduce a hurricane perfectly on contemporary computers. But I thought you were asking conceptually: is it in principle impossible to accurately simulate a system? In theory, no, it shouldn't be. Provided you have precise data regarding the relationships between every element in the system, and you have enough computational power to run those relationships over time, there is no theoretical barrier to reproducing that system perfectly. Whether we could ever, in real life, do that for something as complex as the human mind is a different issue.

Gian's avatar

What does simulation of something mean? For a hurricane, it is reproducing the flow as a field. But it is all numbers. There is no hurricane.

Victor's avatar

Now we are getting into semantics, as I warned. When scientists create a model of a real-world phenomenon, like a hurricane, what they are attempting to do is test whether or not the real-world phenomenon follows a mathematical model, and how closely. These models are based on various theories about how the phenomenon works - in the case of a hurricane, how air molecules exchange energy over time in a space. If the model is based on these theoretical ideas, and the model behaves in a very similar fashion to the real-world phenomenon, then this is taken as evidence that the theories are close to being an accurate description of what is happening in the phenomenon (the hurricane).

Models like this are never 100% predictive, because we have only incomplete data regarding the actual behavior of air molecules in hurricanes. But what if we had 100% precise data on every quantum of energy exchanged by every molecule? Then, in theory, the mathematical model and the hurricane would behave precisely the same. It is probable that we can never do this, because we can never get that precise with our data, and our computers do not possess the computational capacity to run the model if we did, but there is no a priori reason why a simulation cannot come arbitrarily close to the real thing.
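A toy illustration of why imprecise data limits prediction (this is nothing like a real atmosphere model; the logistic map is just a standard textbook example of a chaotic system, and the function names and parameters below are my own invention). Start "reality" and a "model" from initial conditions that differ by one part in a billion, and count how long the model tracks reality:

```python
def logistic(x, r=3.9):
    """One step of the logistic map, a classic toy chaotic system."""
    return r * x * (1 - x)

def steps_until_divergence(x0, err=1e-9, tol=0.1, max_steps=200):
    """Iterate 'reality' and a 'model' whose initial measurement is off
    by `err`; return the first step at which they differ by more than
    `tol`, or None if they never do within max_steps."""
    reality, model = x0, x0 + err
    for step in range(1, max_steps + 1):
        reality, model = logistic(reality), logistic(model)
        if abs(reality - model) > tol:
            return step
    return None

# A billionth of measurement error wrecks the forecast within a few
# dozen steps; shrinking the error a millionfold buys only a modest
# amount of extra lead time, because the error grows exponentially.
print(steps_until_divergence(0.4, err=1e-9))
print(steps_until_divergence(0.4, err=1e-15))
```

The point being: for a chaotic system, "arbitrarily close to the real thing" demands arbitrarily precise data, which is exactly the practical (not theoretical) limit at issue.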

This is why artificial general intelligence is a concern. In theory, there is no reason why a more precise model of the human mind than we currently have could not reproduce various cognitive processes going on inside it, including conscious self-awareness. I do not believe that we are anywhere near producing that outcome, but I can't think of any reason why it would be impossible.

Gian's avatar

Even a 100% precise model is a model and not the actual thing.

The model is just numbers while hurricane has atoms and molecules.

Leppi's avatar

1) It is conceivable, but how would you falsify this?

2) It is true that science by definition can only capture what we can measure (metrical), and by extension, physics as a branch of science can only capture what we can measure.

I'm not sure the same is true for simulation - e.g. simulations could capture non-metrical properties by accident.

I think you are confusing the map with the territory. A simulation could, in principle, be conscious. If you are correct that consciousness is not measurable, then it follows that we have no way of knowing that our simulation is conscious (at least by science). It does not follow that the simulation does not cause consciousness by some unknown mechanism.

Gian's avatar

Well, one cannot rule out that a simulation may capture non-metrical properties by accident. So it cannot be ruled out that a simulated human could be conscious, but nothing guarantees it.

That's all I am saying. You can have a perfectly simulated human in silicon, behaving pretty much like an ordinary human, at least for short duration, and who would be perfectly non-conscious.

This is what I would expect.

Leppi's avatar

>Well, one can not rule out that a simulation may capture non-metrical properties by accident. So, it can not be ruled out that a simulated human could be conscious, But nothing guarantees it.

Agreed, but who is arguing that it is guaranteed? I think we can't know one way or the other - though if you have a simulation that gives output exactly like a human, it seems conceivable that it might be conscious. After all, the only evidence I have that other human beings are conscious is that they are similar to me. You could well all be p-zombies!

Also, if you believe in a mechanical world governed by some underlying rules, there must be some replication of a human brain that would produce consciousness.

>This is what I would expect.

Why would you expect that? I think we have no idea how consciousness is produced, or really even a good grasp of what consciousness is!

Eremolalos's avatar

It's also possible that gravity is the suction of the eternally hungry earth's core, craving to pull us into its mouth. But lots of other things about how the universe works strongly support a different model of whattup with gravity.

Gian's avatar

Let me add that conscious human is not behaviorally identical to a non-conscious or pre-conscious one.

A conscious human can plot deceit for 50 years without acting. A non-conscious perhaps not even 50 minutes.

Shankar Sivarajan's avatar

You should know this is an idiosyncratic definition of "consciousness."

Gian's avatar

I am working with Jaynes' definition.

Shankar Sivarajan's avatar

If so, you don't seem to be doing so consistently.

Gian's avatar

Well, it is similar, only the cleavage is between conscious and non-conscious, not between living and non-living.

EngineOfCreation's avatar

Generally I'm very suspicious of non-physical explanations for physical phenomena. If something non-physical caused consciousness, how would that thing "interface" with the physical world of brains and neurons? Maybe we simply don't know enough yet, or aren't smart enough, to have a useful model of all brain functions that is amenable to simulation. That doesn't mean we will never have one, and it certainly doesn't mean that we *can* never have one.

Gian's avatar

There is nothing non-physical but the idea is there are aspects to physical objects that can not be captured by physics.

Gian's avatar

Interesting that

"Watts himself dismisses the idea that humans have free will as a "farce" unworthy of serious debate. "I don't have much to say about it because the arguments seem so clear-cut as to be almost uninteresting. Neurons do not fire themselves. [...] The switch cannot flip itself. QED." "

He can not be sympathetic to the view sketched here.

Leppi's avatar

If by free will you mean that your brain takes input, processes it, and then makes decisions that have implications in the real world, then I think humans have free will.

I have come to think there is no real paradox between this and a mechanical world view; it's just at a different level of abstraction. The neurons may fire in an entirely mechanical way, completely determined by physics, and the effect of this is that thoughts arise in the brain, and decisions are made, i.e. free will.

The alternative is that there is something outside the physical world making the decisions. That seems to lead to all sorts of problems. Probably that is the (naive) version of free will Watts is dismissing.

The Ancient Geek's avatar

There's more than two theories.

Different definitions of free will require freedom from different things. Much of the debate centres on Libertarian free will, which requires freedom from complete causal determinism (and therefore, freedom from inevitability).

(Watts seems to be requiring fully uncaused causes -- neurons that fire for no reason at all -- rather than not-fully-deterministic causes).

Compatibilist definitions of free will only require freedom from compulsion, and allow free will to exist in a deterministic universe. Sam Harris believes free will is a form of conscious control.

Libertarian free will has sub-varieties.

One is "contra causal" free will, which requires freedom from physics, on the assumption that physics is deterministic. This is often connected with the idea of a supernatural soul, that is able to override the physics of the brain. In contrast, naturalistic libertarians seek to find free will within physics, by rejecting physical determinism; they regard indeterminism as a necessary (but perhaps not sufficient) condition of free will.

Leppi's avatar

Thanks, I really should get more into the literature on this.

I think what I'm saying is that I agree with the compatibilist definition, and reject both versions of libertarian free will.

I think the more interesting rejection is of the naturalistic version (given that I understand it correctly). What does it really mean to have a choice in this sense, within physics? First, I think it requires that there is somehow a counterfactual: you could turn back time, or there is an alternative universe where a different choice is made (or at least this is imaginable). I think that implies that, given the same input and the same exact state of the brain (including the sum of all knowledge up to that point), your brain could decide differently. But this only seems to imply that there is some random component to your decision process. If that is the case, I think it is not really what most people think of when they say free will. So really, naturalistic free will looks like an illusion, or no more free will than the compatibilist version. My conclusion is that the naturalistic definition of free will really requires accepting the compatibilist view.

I'd be interested to hear your thoughts on this.

The Ancient Geek's avatar

Naturalistic libertarians talk about torn decisions, where you have fairly strong reasons to do more than one thing. An undetermined choice between two things you have reasons to do cannot leave you doing something for no reason.

This point is explained by the parable of the cake.

If I am offered a slice of cake, I might want to take it so as not to refuse my hostess, but also to refuse it so as to stick to my diet. Whichever action I chose would have been supported by a reason. Reasons and actions can be chosen in pairs: in the case of the cake, (diet, refuse) and (politeness, eat).

Victor's avatar

"If by free will you mean that your brain takes input, processes it, and then makes decisions that have implications in the real world, then I think humans have free will."

That is not what free will means. That's just will alone. Free will means that some component of the will is free from the chain of cause and effect. It originates behavioral sequences, or, put another way, it is an original source of a causal chain.

However, others have defined free will as the source of cognitive outputs that cannot, even in theory, be predicted by any means available to us, and yet manifest a structure such that they aren't random either. Nonlinear systems are one such example. If consciousness is a nonlinear system, then it might appear free from our point of view.

Leppi's avatar

>That is not what free will means.

I think that depends on your definition of free will. Though I disagree that your definition of free will is a good one, considering what is normally implied by free will.

As I argue in other comments, by your definition of free will there is no free will. But people often imply this has consequences.

For example, people will often argue that if there is no free will, then someone is not responsible for their actions. I believe this is a false conclusion given your definition of free will. Choices that have real-world consequences are very much made in your mind, and it makes perfect sense to hold you responsible for your actions even if things could not have been different. After all, I think saying that things could have been different is either a rejection of entropy or the flow of time, or an endorsement of some form of parallel-worlds theory that requires some random component. It seems that we can't turn back time - which implies that everything that happened (in this world, if you believe in parallel worlds) must have happened exactly as it did in fact happen.

Victor's avatar

It's the definition historically used in philosophical debates about the issue. What the implications are - whether people are "responsible" (whatever you think that means) for their actions or not, or whether free will is possible under the causal assumptions of positivism - is not germane to the definition itself. Besides, it's not the overall concept of free will that I am disputing, it's the use of the word "free". No one disputes the existence of human will, only whether that will is free. Free of what? External causal forces. If the brain is not to some degree free from external forces, then its behavior isn't "free."

Deiseach's avatar

I have a feeling that if I pulled a bag over his head, locked him up in a cellar, tied him to a chair, and started torturing him he'd believe fast enough I had free will and that replies of "Sorry, Pete, I can't help myself, I can't choose to behave otherwise, this is all the complex interplay of atoms bouncing around mechanically in the meat standing before you" wouldn't cut any ice when he was begging me to stop.

Now, on a grand theory scale of "since the creation of the universe this was all foreordained due to the inexorable laws of nature", fine, sure, no free will.

On the level of "You are sticking a gun in my face and telling me hand over all my financial information or you'll blow my brains out", we certainly believe in free will, else why go to court over this? We don't prosecute rocks for falling downhill in obedience to the inexorable laws of nature and breaking our car windows.

Shankar Sivarajan's avatar

"It's possible to torture you until you recant" seems an argument that's … well, in keeping with certain traditions.

Randall Randall's avatar

Prosecuting rocks wouldn't make future rock falls less likely. We do, however, preemptively imprison (fence/concrete) or execute (blast) rocks that might fall to avoid that. People are predictive systems, so prosecution and its aftereffects presumably do help avoid future crime, as does revenge, in many cases.

Victor's avatar

You ask the person to stop because that might cause them to stop. There is no free will involved. Screaming for mercy is simply another external input having a deterministic effect on their behavior (the victim might not be able to predict that effect, but that's not germane to the argument). If the torturer fails to stop, that isn't free will, that's just other external inputs (the mutation that turned them into a psychopath) having greater weight when the behavior is produced.

As for what causes us to believe in free will, that's another issue altogether.

Eremolalos's avatar

I dunno, Deiseach, if he was really brave and snotty he might creep you out pretty good by observing that he knows you couldn't choose to do otherwise than to say "Sorry, Pete, I can't help myself, I can't choose to behave otherwise, this is all the complex interplay of atoms bouncing around mechanically in the meat standing before you."

Gian's avatar

Per Aquinas, stones move by necessity, sheep by instinct and people move freely.

I never fully understood this. Does it mean that there is not a dichotomy between people/non-people but rather a trichotomy of non-living, living, and people?

EngineOfCreation's avatar

>On the level of "You are sticking a gun in my face and telling me hand over all my financial information or you'll blow my brains out", we certainly believe in free will, else why go to court over this?

One could reframe this as "we certainly believe that we can't predict the future for certain." A somewhat nebulous, untestable concept like "free will" need not enter in any fashion.

Deiseach's avatar

Glad to hear from all the people who are sure all we are is dust in the wind 😁

https://www.youtube.com/watch?v=tH2w6Oxx0kQ

Gian's avatar

"Free will" is a metaphysical concept, not an empirical scientific concept. It has nothing to do with prediction.

A person whom you know well could even be more predictable than dynamics of 3 body problem.

Victor's avatar

Not sure I agree. I think there is a potential phenomenon to be explained (a mind that causes its own behavior) and a metaphysical explanation (souls or essence or what have you). But a brain that has the capacity to originate its own behavior, apart from the material chain of cause and effect in the universe, isn't a metaphysical concept, it's a scientific one (albeit one that might not exist).

Eremolalos's avatar

If "free will" is a certain kind of will, what kind of will is its opposite? Wut do we call it?

Ebrima Lelisa's avatar

I'm starting a new tech job and I'm terrified. This is my first full time gig. I graduated and looked for months before finding this job. I'm terrified that I'm going to mess it up.

Please, any and all advice is welcome

vectro's avatar

Manager Tools has an excellent series of podcasts about starting a new job. https://www.manager-tools.com/map-universe/new-job

Tangled plumbline's avatar

Adding to my other reply (which Substack has unfortunately buried at top-level):

Take notes. In real time, not afterwards. The notes themselves are likely to help more than you'd expect (unless you are a habitual note-taker anyway); being *seen* to make notes will give a good impression in many ways.

Dino's avatar

Second this, but both real-time and afterwards. At every job I had, I kept a text file filled with "how to do xxx", and it was super useful. Also, if you're committing bug fixes, log them all in a similar file, also can be very useful.

Wanda Tinasky's avatar

Use ChatGPT to help you, it's really good.

Thomas's avatar

As new to the company AND a new grad, you will be in a wonderful honeymoon period (~6 months) where literally no question you ask regardless of how basic or inane will be counted against you. Stuck on something for more than an hour? Ask. Want sparkling soda instead of those gross energy drinks? Etc. Btw, after 6 months you will be expected to know at least some stuff so get those stupid questions in early.

Deiseach's avatar

You are not going to know everything the first day, and nobody will expect you to do so. Ask for help, people will understand you are new and have no idea as yet 'how things are done here'. Be as helpful as you can be, try and keep out of office politics, and good luck!

Yug Gnirob's avatar

Kick someone's ass the first day.

Victor's avatar

Take down the biggest one, and the others will respect you.

Joey Marianer's avatar

I'm starting a new tech job next week and I'm terrified too, even though I've been doing this for 20 years. Be a person, do good work, and don't be afraid to take criticism. Also, the fact that you're here puts you at an advantage: you can ask us things like "is this normal" or "what am I doing wrong here" and you'll get a bunch of answers.

You got this!

agrajagagain's avatar

I was in the same boat pretty recently, and have (so far) managed to make good on it. A word of caution: every job, company, and person is different, so there's no guarantee that all of this will apply to you. Some ways of thinking that might be helpful:

1. Most professional jobs expect a modest acclimatization period in which you're still learning your way around the job and the business. Don't stress out if you can't do everything at once, and don't be afraid to go to your supervisor or coworkers to ask questions or request help.

2. Make sure to be respectful of whoever your immediate supervisor is. I don't mean "respectful" in the sense of bowing and scraping, I mean it in the sense of giving due consideration to how the things you do impact them. Things like being upfront with them about problems or delays that could reflect on them, keeping them in the loop if you have important discussions/collaborations that they're not party to, and not complaining about them to others in the org (even if they deserve it). If they're a good boss, this should just be basic decency. If they're not, think of it as a set of survival skills: whatever their faults, you'll still be better off if you don't make them mad at you.

3. Beyond 1 and 2, just generally try to do good work. In a good, well-functioning organization, good work will be noticed and rewarded. In a more dysfunctional organization it may not be rewarded directly, but developing the habits and skills of good work will make it easier to move to a better job, especially if you can produce anything tangible (like code in a portfolio) that demonstrates it.

4. Keep an eye on the exit, even if you'd prefer not to have to use it (this feels weird to write because I really like my current job and employer, and hope to stay there for many years). Every job is a business arrangement and you should generally expect that your employer will be perfectly willing to end it if that's what's best for them. So you should keep some of the same attitude. My understanding is that working for even a few months at a tech job will make you considerably more employable elsewhere, and you will likely be improving your skills quite a bit at first. Even if you don't intend to apply to anything, try to keep an up-to-date resume/CV and take at least occasional glances at relevant job postings[1] to have a sense of what your options are and how difficult it would be to pull up stakes if you need to. Also, psychologically, keeping in mind that you *can* leave can help take some of the edge off stressful situations, and it's easier to do good work if you aren't stressed.

[1] But *don't* do it or mention it at work. Lots of people are reasonable about it, but some employers will take it the wrong way.

striker's avatar

What a beautiful milestone. A few things to keep in mind:

1. Nobody there knows everything. Not even the CEO. All companies are collections of incomplete knowledge and inefficient systems.

2. Always do two things:

(a) your core job: what’s on your job description or your contract. You will be asked to do a multitude of things outside of that, and you’ll do them (because you want to be a good colleague, and because you want to learn, and because you don’t yet know what you’re going to be good at) but don’t let all those other tasks distract you from getting your core job done.

(b) pay attention to where you can add value over and above your core job. You’ll get some credit for doing exactly what you were hired to do, but you’ll get more for seeing things you weren’t hired to do and doing them too.

3. Yes, it’s scary, because you’re going to get paid next month and the month after that and the month after that, and you’ll be afraid that you’re not earning that money, but know that every day you’re learning things (even things that seem tiny and insignificant) that are valuable to your current employer – and especially valuable to another employer further down the road.

4. Jobs, gigs, projects, opportunities come and go all the time. You don’t know where the next one is going to come from, but I promise you it’s going to come.

5. You will face situations that you hate, that you aren’t sure you can manage, and you will wonder how you’re going to get past them. And then they’ll be gone, and something else will replace them, and everyone (including you) will forget about the thing that seemed insurmountable.

Good luck!

Shankar Sivarajan's avatar

Don't post how you spend your day on social media.

Don P.'s avatar

Just remember that everybody else there also had a first day, once.

Brad's avatar

Just remember that the vast majority of people are not actually as good at what they do as they originally seem. Take pride in fully comprehending your responsibilities, take ownership of them, *just do things* instead of timidly wondering “what if?”, and you’ll be better than 75% of workers. Once you realize you are fully capable of doing nearly anything at all with a good ol’ college try you’ll start doing even more, and then you’ll be better than 95% of workers.

Samuel Potter's avatar

I'm in a similar boat; today was my first day, actually. I think the biggest idea to keep in mind is the fact that you were hired by your employer. They know that you're new to the game, they've factored your inexperience into their calculations, and they still decided it's worth it to hire you, because you'll learn and grow as they show you what your job consists of and what they want from you. Figure out the actual boundaries you have at work with your supervisor and your team, find out what your best resources to learn are (people or otherwise), do your best with what they give you, try not to delete the repo and all the backups (or other mistakes of that magnitude), and you should be fine.

Dragor's avatar

Sleep Sheep Review:

I signed up for Sleep Sheep the day Scott posted the link to an open thread. I was pretty desperate, feeling dysphoria from a number of particularly terrible nights’ sleep. What follows is my review as best I can craft it with a brain that is rationing sleep opportunity time.

The best thing about Sleep Sheep is that I’m doing it. I’d known about CBT-I for some time. I’d known it was the gold standard, that it involved rationing sleep, and I’d vaguely even planned August-October as the time to do it since my wife is on a series of vacations, and it’s easier for me to tinker with lifestyle choices while she’s away. I’d nonetheless remained fairly horrified at the prospect of rationing sleep given how horrible sleep deprivation has been to my life, and I’d been reluctant to pull the trigger.

Turns out horror like mine is common, and it feeds into the insomnia cycle. Awoken at night for whatever reason and impatient to fall asleep, I fixated on untold horrors that might happen the next day due to my inability to fall asleep. These foretellings compounded the problem. I'd noticed this connection, and begun to notice certain days turning out pretty ok, even excellent sometimes. I'd even begun debugging my emotional response, but this process was iterative and piecemeal, one enterprise among many in a life without a coherent plan, often set aside when other demands became urgent.

What I appreciate about Sleep Sheep is that it provided a simple, socially grounded structure to keep me persistently working towards consistent, quality sleep: log my sleep, follow the advice of an AI sheep, and meet once a week with my sleep coach. I stated at the beginning I would make a good faith effort to work with the program, and I would have felt embarrassed going into a weekly meeting with Luomei having shirked my objectively not-that-hard responsibilities. Meanwhile, the prospect of asking for a refund if the program failed because I hadn't done the bare minimum felt horrifying. Squeezed between twin shame-driven imperatives, I charged forward into the sleep-deprived abyss.

And the abyss was… not that bad? Day by day, I muddled through, generating disconfirming evidence against the catastrophic foretellings disrupting my sleep in the middle of the night. My sleep wasn't great, but it offered predictability. I could mostly plan a life around it. Going through the first couple weeks was difficult, but there was a frontier of incremental progress. In fitness, meditation, and clinical practice I have grown to love the philosophy of progressive overload, so the notion of a brighter future built on tolerance of present suffering slotted into a well-developed philosophical infrastructure.

There is an archetype of someone who does not need the structure and support Sleep Sheep provides. A friend from practicum noticed a few months back she was having difficulty sleeping, implemented a CBT-I protocol, and moved on to living her best life. Said friend also had a successful career as a military dentist, recovered from a horrible muscular injury, and balanced a mildly rigorous MFT program alongside full-time work. She makes frequent use of spreadsheets.

I enjoy thinking of myself as made from that mold, but past experience demonstrates I am not, and I prefer the consequences of realistic planning. So I pay $300 a month, Luomei gives inspiring pep talks once a week, and an AI sheep gives me psychoeducation as we structure my CBT-I program. Sometimes I backburner sleep improvement, and I don't worry about it because Sleep Sheep will get me back on track in, at most, a week's time. I am experiencing incremental improvements in a life domain that has strained my marriage and rootbound my career.

I think many people are more like me than my high conscientiousness friend. We prefer seeing ourselves as able to do the simple thing, yet, empirically, we don’t. It feels stupid to pay an exorbitant sum to a specialist to hold our hands, so we starve valuable self improvement avenues of investment—not because they lack expected value but because the pathway to that value is aesthetically unappealing. Paid help can bypass anticipatory anhedonia, allowing better lives of more frequent exercise, emotional regulation, and, in my case, better sleep. But paying for “simple” feels weird and shameful, so we don’t. For people like that who have difficulty sleeping, I recommend Sleep Sheep.

Josh's avatar

A joke, a joke.

Josh's avatar

This is encouraging on many levels, not just sleep. - “A brighter future built on tolerance of present suffering “

Dragor's avatar

Dude, I so wanted to loop in the general argument for using paid services to overcome the anticipatory anhedonia that impedes progressive overload across many life domains, but, alas, I couldn't fit it in.

Josh's avatar

That’s a lot to stuff into a mattress review 🙂

Dragor's avatar

Oh, no, this is the CBT-I app Scott linked a month back: https://www.gnsheep.com

Shankar Sivarajan's avatar

Trump's tactic of suing companies for damages, and then settling for a few tens of millions of dollars – YouTube just settled for $25M today – seems to be a very effective way of legally accepting bribes out in the open. This seems innovative enough I wonder if this is going to become a standard strategy, both in the US and worldwide; I think most countries have a legal mechanism for settling suits out of court.

Wanda Tinasky's avatar

No because it's too hard to execute the litigable action with plausible deniability. These suits are all downstream of things that happened on Jan 6. That's too indirect to be useful. No company is going to initiate a bribe by attacking a politician who has a 20% chance of being elected again in 4 years. Cool idea though.

Charles Krug's avatar

Unless the settlement is in the form of a suitcase full of cash or a chest of Spanish bullion or a canvas bag clearly labeled "SWAG"... personally delivered to Donald Trump, it's not bribery.

It's a tax that gets plopped into the general fund and squandered on Something Useless (tm) however you personally define "Useless."

Naturally I only support Federal programs that Any Reasonable Man considers "Essential," but All Those Other People constantly advocate for useless and wasteful spending.

Sad thing is that it's an irrelevant amount of money to both Alphabet and the US Treasury.

Shankar Sivarajan's avatar

YouTube's settlement seems earmarked towards building the White House Ballroom, not the general fund of the US Treasury.

Charles Krug's avatar

And I missed the fact that it’s a Personal lawsuit vs “Donald Trump” rather than vs “President Trump.”

Tamar's avatar

Look into SLAPP suits and anti-SLAPP laws

Shankar Sivarajan's avatar

You might have missed the point. I agree that the companies he's suing would win in court, and the expense of the trial isn't significant to them. Neither is the money they're settling for: this is a way to legitimize a mutually beneficial transaction.

Michael Watts's avatar

> Trump's tactic of suing companies for damages, and then settling for a few tens of millions of dollars – YouTube just settled for $25M today – seems to be a very effective way of legally accepting bribes out in the open. This seems innovative enough I wonder if this is going to become a standard strategy,

I'm not sure it is innovative. Just as the Texas anti-abortion law was described as "innovative" for copying the longstanding method that civil rights laws used to ignore the first amendment, this approach sounds a lot like the longstanding agency method of soliciting a lawsuit from an ideologically aligned group and then settling the lawsuit with an agreement to do something that they wouldn't have had the power to do if it hadn't been part of a legal settlement.

Dragor's avatar

This sounds pretty interesting. Can you elaborate with examples?

agrajagagain's avatar

I expect that a lot of the effectiveness comes from everyone still treating Trump as an aberration. A one-time payment of $25 M is chump change to a company the size of YouTube, and more than worth the cost if it's all they need to do (even if it's multiple times) to weather the storm and wait for sanity to return.

If it started to look like other government officials were going to follow suit, suddenly companies would be incentivized to fight much harder. They keep armies of lawyers on retainer anyway. In a world where non-Trump officials feel free to do the same, fighting some drawn-out legal battles is enormously preferable to signalling that you're open to paying Danegeld to anyone with the power to ask for it.

Shankar Sivarajan's avatar

I agree it's a trifling amount for any of these companies. I think this is "protection money" in a sense more real than simply as a euphemism for extortion: I think the implicit agreement is that Trump, in addition to not attacking them, keeps others with the power to hurt them – I'm thinking primarily of federal regulatory agencies here, but should also include non-Trump officials – from doing so.

User's avatar
Comment removed
Sep 30
beleester's avatar

"Caught" implies a crime was committed, and I don't understand what crime Youtube could have committed by banning the President's account. Surely Trump isn't claiming that the government should compel Youtube to carry his speech, right?

Shankar Sivarajan's avatar

You might be missing the wood for the trees: that explanation seems narrowly tailored for YouTube and you'd need a different one for the other companies he's doing this to/with.

User's avatar
Comment removed
Sep 30
Shankar Sivarajan's avatar

> you'd have to actually discuss them

When I said you'd need a different explanation for each company, I wasn't asking you to come up with them. On the contrary, I meant that you needed a single explanation that works for them all without needing a list.

Jack's avatar

A lot depends on the response. If Dems win in 2029 and investigate each such case for bribery, then it will probably be shut down. If not then it just becomes an accepted tactic for anyone brazen enough to do it.

The Supreme Court has made it harder, both by narrowing the definition of bribery and by saying the president is immune from prosecution (the obvious tactic would be to tell these CEOs that you'll throw the book at them unless they flip and testify against the president, who also might pardon them). There might be some legal hardball a future administration could play to get around it, but I don't know if they would.

Eremolalos's avatar

Guy's a cornucopia of ideas of that kind.

Shankar Sivarajan's avatar

Deserving of the Nobel (Memorial) Prize in Economics.

RyanBTrue's avatar

I'm a 33-year-old man who is exactly 5 feet tall. I'm thinking about getting on a dating app because, realistically, that's the best way that I'll be able to meet people to date. I'm dreading it though because I've heard that it's brutal for short guys, and I'm way out on the end of the spectrum. I vaguely remember some study where upwards of 95% of women wouldn't date someone my height. I don't even want to google to find the study because I remember all the info I could find on it before being pretty bleak.

I'm worried that a dating app will destroy the meager amount of self-esteem that I do have right now. How should I prepare myself for this? Am I overly worried?

Neurology For You's avatar

I can’t give you any specific advice, but I will say approach it in the spirit of experimentation, keep notes, and optimize like hell.

Paul Brinkley's avatar

End of the day: most women are attracted to whatever says "successful mate" to them, which probably can mean "physically capable", and by extension, "tall", but also means a great many other things, which can easily compensate for "not tall". Since you can't help not-tall, make sure the other things work.

Christina's advice about looking like an expert on something is sound. The more, the better, so either be good at lots of things or _really_ good at a few. So is her advice on looking like you care about how you look, which I take to really mean that you should care about taking care of yourself physically. So, chances are, you can look healthy (exercise + diet). This demonstrates self-discipline; everyone likes someone who's self-disciplined.

If you're concerned enough about self-esteem to bring it up, then that might be your first challenge. You're likely to see a lot of improvement if you can express confidence in yourself. If you're there, you're in really good shape.

I'm reminded of a fandom convention I went to years ago. That evening, I'm walking along, and I notice a crowd of guys trying to talk up a chum of theirs who'd just turned 25 and was still single. I keep going (to the men's room), wash up, head back out, they're still trying to boost him, telling him not to be worried. I lean slightly to the side as I pass by and say, "they're right you know". Then I turn around, thumbs at myself, "40", and give my best "I'm not worried either" pose. One of the boosters yells out "you are my hero!!". I grin, salute Mr. 25, and keep walking. They didn't have to know that the woman I was crushing on for several months mentioned that day that she was interested in another guy.

Today, I live with my GF, steady as a rock.

There are _some_ women who like a guy with self-esteem issues (it comes off as vulnerability, and they're looking for someone they can nurse), but confidence probably puts you in a larger pool. Plus, it's probably healthier independent of whether you meet a nice gal.

Christina the StoryGirl's avatar

Make sure some of your photos on the app show you expertly doing *something.*

Most women who are smart enough to be meaningfully attracted to someone smart enough to be an ACX reader/commenter are going to be profoundly attracted to the confidence which comes with competence in some skill that matters to others.

So if you play a musical instrument, double down on practice *and performance.* If you cook, get better at plating. Play a sport? Make sure it looks like you love it. If you're interested in topping in kink, devote yourself to learning rope bondage, especially rigging, and go to lots of public workshops (I was friends with a 5'2" older guy who developed a really amazing skillset in electrical play, and he had many kinky women following him around despite looking NOTHING like a conventional porn star). Etc.

The exception to this advice about highlighting a skill is video gaming. While there may be some women who deeply admire expert gaming, in reality, no there aren't.

You're a guy, with presumably normal male biological impulses, so you may be tempted to dismiss this advice because you're used to human attraction being overwhelmingly visually based. Please try to ignore your own experience and take this on faith: Women are indeed different than men when it comes to attraction. For the smarter ones, expertise is as attractive as a long body and square jaw. Formula 1 drivers are small men who don't lack for women's attention, as are gymnasts and jockeys and musicians and and and...

You don't need to be famous, of course.

You just need to be objectively good enough at something to legitimately earn the confidence of being good at something.

Women will notice if you show that to them.

Erica Rall's avatar

>The exception to this advice about highlighting a skill is video gaming.

I would also recommend against a profile picture showing you fishing. "Man posing with fish" profile pics seem to be enough of a cliche to inspire eye-rolling among women of my acquaintance who use dating apps. My social circles (which lean very nerdy and very blue-tribe) are probably culturally less likely to consider fishing an appealing hobby, but even among women who do find fishermen alluring, I expect the market is saturated.

Christina the StoryGirl's avatar

Yeah, agreed, fishing may not be a great look unless it's, like, a marlin or tuna or something that required tremendous skill.

And even then, maybe not.

Tyrone Slothrop's avatar

I dunno Christina, this guy was married 4 times. Fish catch pix didn’t hurt him a bit with the ladies.

He did die in a place called Ketchum though so they might have been a bad omen.

https://1.bp.blogspot.com/-eNEDkuAszfg/YPhlemQiNLI/AAAAAAAEKHE/O9MQGRQp84UtsUgzeIK-CsT3DdiBjA_fQCLcBGAsYHQ/s1000/ernest-hemingway-big-fish-3.jpg

It’s a little known fact that the postal service didn’t actually have any rules against their employees dating women, it just worked out that way for me.

gdanning's avatar

If it makes you feel any better, Robert Reich is 4'11". But he seems to have a sense of humor about his height (whether he did at your age, I don't know). So, maybe make light of it in your profile?

Pjohn's avatar

I completely understand that it hurts to be rejected/not-selected regardless of the reason: if you can develop the mental habit of only "counting" rejections for appropriate reasons (difficult, I know!) this might make it less distressing and easier to bear?

For example, if some woman rejects you but she's the sort of person you wouldn't want to date anyway, that's a great example of the sort of rejection that shouldn't count (if anything, this should count as a bonus: this woman you wouldn't want to date is doing the work of removing herself from your dating pool for you!)

If this system makes sense to you, I suggest that "I'm the sort of person who filters based on height rather than on personality, shared interests, even on looks/vibe more generally" would be an excellent candidate for the sort of person you probably wouldn't want to date anyway?

If the average match rate for a 6' guy is 1 per week (made this number up out of thin air as I've no idea really! The general point should hopefully be the same if you plug in different numbers) and the average match rate for a 5' guy is 95% lower (but it's cool: this 95% is largely made up of women whom you mostly wouldn't want to date anyway!) then your match rate would be 1 per 5 months:

A) This isn't to be sniffed at! Far more than if you don't join in the first place; if the first match turns out to be perfect that's a total of 5 months' wait to meet your soulmate, which seems like a great deal; if the first match isn't perfect this is still good and healthy, see Scott's writing on micromarriages for more info: https://www.astralcodexten.com/p/theres-a-time-for-everyone

B) If you get more than 1 match every 5 months - you're doing better than your expected baseline, your other qualities are so attractive that *even some of the "I don't match with short guys" women are matching with you*(!), and you should actually feel very good about yourself!

Hope this helps - and good luck!
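(For anyone who wants to check the back-of-the-envelope numbers above, here's a minimal sketch; the 1-match-per-week baseline and the 95% reduction are the made-up figures from the comment, not data:)

```python
# Hypothetical figures from the comment: a 6' guy's baseline match rate,
# and a 95% lower rate for a 5' guy.
baseline_per_week = 1.0      # assumed: 1 match per week
reduction = 0.95             # assumed: 95% fewer matches

rate_per_week = baseline_per_week * (1 - reduction)  # 0.05 matches/week
weeks_per_match = 1 / rate_per_week                  # 20 weeks between matches
months_per_match = weeks_per_match / 4.33            # ~4.6 months, i.e. roughly "1 per 5 months"
```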

blorbo's avatar

If someone would reject you for your height, you would not enjoy dating them.

Nancy Lebovitz's avatar

https://www.youtube.com/watch?v=OgIBuOiigSo

Jordan Harbinger, who is moderately short ("5' 10" with shoes on") says that women don't care about height as much as some say they do.

Women who are with a man shorter than they are will lose the ability to see the height difference. Women choose men who make them feel good.

Or maybe you could say that they care about perceived height, and will perceive what they need to perceive.

I haven't confirmed this, but I do find it interesting.

Justout's avatar

I struggle to understand how you can describe someone as "moderately short" when they are above the average male height in the US (5' 9"). If I were the OP I would be unable to relate that comment to my situation and indeed would probably find it quite offensive.

Neurology For You's avatar

It’s the distorted world of the Internet, where leading ladies are “mid” and normal guys are short because everyone’s reference points are online celebrities or even AI generated.

Eremolalos's avatar

I know a man who’s your height. Has been happily married for a couple decades to a petite, very pretty, Asian woman.

Alexander Turok's avatar

You might benefit from telling yourself that self-esteem is a decadent concept that is eroding the foundations of Western civilization - it does seem to be holding you back.

I would advise you to treat your situation like an emergency and sign up ASAP. Take time off work if necessary. Sign up for as many as possible and ask yourself if you're willing to date fat women/single mothers/etc. There are even dating apps especially for fat people.

SP's avatar

What's your race? Ngl, it will be pretty brutal so be mentally prepared. Prepare for the worst, and even hope for the worst honestly. It sucks, but you have to play the cards you've been dealt, unfortunately.

RyanBTrue's avatar

I'm white. The worst case scenario is getting no matches and a small number of mean spirited messages. I'm thinking that cruel messages aren't super likely but no matches is realistic.

SP's avatar
Sep 30 (edited)

That's a pretty good advantage to have in the dating market. You have an ok chance with East/Southeast Asian immigrant women I'd say. Get the premium version of 2 dating apps at a time and swipe somewhat selectively for a few months. Watch out for scammers; you may make mistakes here and there at first (asking out too soon, asking out not soon enough haha) but with time you should get the hang of it. Have a decent bio (job if it's something that pays well + 2-3 hobbies). Look online for examples of bios and fine-tune yours accordingly. 3-4 good photos (no sunglasses, with different clothes at different locations). Also remember most guys get very few matches to begin with, and with you it will be even slower, but it's a grind, and if it works out it will be worth it. Don't spend more than 5 minutes on dating apps per day. Be disciplined about that. If after a few months you feel like it's hitting your self-esteem, just delete the apps for a few months and focus on something else. Come back to it when you are ready.

Christina the StoryGirl's avatar

Seconding some of the advice on the photos, and I'm going to escalate by advising you to look into professional portrait photography. I wouldn't go so far as a formal corporate or actor-style headshot, but rather "candid" (those quotes are doing a lot of heavy lifting) portraiture by a real expert who knows how to make a person's face a story.

Everyone with a phone thinks they're a photographer, but someone good enough to charge money will catch your face looking compelling, even if you're "ugly."

SP's avatar

I don't think a professional portrait is going to work. It comes across as trying too hard, and again, girls see through these things. His face is not the issue either. He is probably attractive enough; it's just the height that's the issue, which professional photography won't help. Professional photography will help if you have an ugly face but average height. But again, no harm in trying different things I suppose.

Christina the StoryGirl's avatar

Hey, you know what "girls" like?

Men who care *enough* about what they look like, which is to say, care at *all.* This covers the pretty basic stuff of having good hygiene and basic grooming, dressing in clean, well-maintained clothing which fits their body, and so on.

A photo which communicates "I care enough about making a good impression to share this excellent photo" conveys all of the above, plus more.

Most women (not "girls," the OP is 33!) are not going to care where a compelling photo of a man came from. A "candid" professional photo of the OP flambeing a crepe or etc can be explained as a "candid" shot someone took during a dinner party.

It's not that hard.

User's avatar
Comment removed
Sep 30
Christina the StoryGirl's avatar

I mean, yeah, look at the photographer's portfolio. If they aren't good, don't hire them.

I'm more saying that photos by an established pro portrait photographer are *far* more likely to be appealing than random selfies.

Dragor's avatar

I'm pretty skeptical of the claim that apps are the best way to meet people to date. I don't have skin in the game, so my opinions are admittedly perhaps armchair, but maximizing face-to-face interaction especially in disproportionately female spaces always made more sense to me for the meeting people part.

SP's avatar

That works if you are already attractive or have insane charisma. There's a limit to how many people you can meet in real life. But with dating apps, if you live in a large city your pool is in the hundreds of thousands. I am quite an ugly guy, and the few dates I have been on have only been possible due to dating apps. If I lived in the 90s, where I had to meet girls in clubs or something, I would have been cooked haha.

Dragor's avatar

Damn. If you're right I've been giving people terrible advice. Nonetheless, I think charisma is pretty learnable? Like, I suspect that if someone read a book on therapeutic microskills, and did loving kindness meditation for somewhere between fifteen and sixty minutes a day they'd have the basis for iterative improvement? I am myself fairly charismatic and reasonably attractive though, so I always wonder if I'm just full of shit on this subject.

Lucas Campbell's avatar

What book(s) specifically do you recommend for improving charisma?

Dragor's avatar

That is a very fair question. Although it would behoove me to have an answer given how strong my opinions on this subject are, I have never actually assembled a curriculum or anything. If you DM me I'll give you my number, and I'd be happy to chat about the subject, especially considering everything below is pretty ad hoc.

Back to your request: I think my suggestion would be some mix of microskills books, meditation books, and therapy books. Your goal here is to be able to feel ease and love so that you can express that reaction to others, sincerely enjoying their company. For the microskills book I read Intentional Interviewing and Counseling by Ivey and Ivey. It's not, like, great, and if someone knew a good book on therapeutic microskills I'd be really interested, but it covers encouragers, reflecting, and summarizing, which are really what you want.

For meditation books, what you're going for is loving kindness. If you can cultivate an intense experience of love, you'll get the ability to convey it to others, which will make you really enjoyable to be around. I really like I'm Right You're Wrong by Ajahn Amaro and Broad View Boundless Heart by Ajahn Amaro and Ajahn Pasanno. The authors are Buddhist monks, so if you prefer something secular I think Judson Brewer has guided meditations, and Sam Harris might cover loving kindness in Waking Up?

Which therapy books are good kinda depends on your personal taste and what's difficult for you. Generally, what you're going for is excellent self-esteem, low anxiety, and communication skills. Grading by enjoyability of writing and usefulness of content, Judson Brewer and David Burns both wrote good books. The New Peoplemaking is also great if you have difficulty with your family of origin.

John Schilling's avatar

"How to Win Friends and Influence People" is trite, cliched, and pretty much what it says on the label. Not a bad place to start, at least.

Dragor's avatar

I, uh, really should read that.

SP's avatar

You can learn enough charisma to get on with day-to-day life (work, school, etc.) but you are not going to be charismatic enough to pull girls while being conventionally unattractive. That kind of charisma is natural. Girls are not dumb either. They are insanely perceptive and can sense you are just following the steps from the How to Be Charismatic guide. There will always be a small number of girls who like a specific type of guy (short, autistic, ugly, goofy, awkward), whatever. They themselves might not fit into any of the above categories, but for whatever reason they like them. Our best shot is to cast a wide net on the dating apps and try to find them. But the process will be absolutely brutal and there's always a good chance it might not even work out. But again, life ain't fair.

Dragor's avatar

Hmm. I think there's truth to what you're saying. Like, I have a friend who seems to work from a playbook, and sometimes he comes across as false. Meanwhile, my experience has been mediated by being conventionally at least mildly attractive. Still, for me, across the arc of my life, learning charisma has _felt_ like a skill. Like, over time I have a more coherent notion of how to befriend someone, or improve someone's day, or ask a store employee to do me a favor. And my friend, who read the books too, genuinely can carry a conversation with a stranger better than most.

Does your point apply to people of middling attractiveness? I haven't actually advised anyone who is _un_attractive. But a lot of people who have strengths seem to obsess over their deficits. While not contradicting anything you said, I remain confident a big chunk of attractiveness/charisma is intrapersonal and learnable.

SP's avatar

Honestly, no harm in trying. It's cliché, but everyone's different. There are a million subtly different things about us, physically and behaviorally, which are perceived subtly differently by everyone else as well.

I will take back what I said before, and advise anyone reading this to just try different things out. See what works, and on whom. Maybe the steps from the guide to charisma might just work out for you. Even if it doesn't work on girl 1, it might work on girl 2, who knows? But you still will be interacting with people, and that alone goes a long way towards improving your social skills.

User's avatar
Comment removed
Sep 30
Dragor's avatar

Yeah, I think I imagine human relationships as being fairly, well, Pavlovian. Like, if you interact with someone and bring them delight, it's easy to imagine wanting to interact with you more. If you sincerely enjoy the exchange, it's much easier to put forward signs of interest since it's very low stakes to refuse you (you enjoy them regardless). Having female friends is a good way to get introduced to eligible women, and statistically some quantity of people are in fact single and interested. I suppose my model supposes that it is possible to learn to feel and spread joy, which not everyone agrees with.

Per your points though, it's super great to do stuff with people because as you say it's a fun way to connect—plus if there isn't romantic compatibility, you still had fun hiking or whatever!

Ebrima Lelisa's avatar

Bruh alright. As someone else said if you put your height up front hopefully nobody will match with you and send hurtful messages.

That being said, there is a real grind and soul-sucking aspect to it. For guys of average height it's already madness. Even though I thought I was desensitized, the constant bs just hurts.

That's your greatest risk. The grind. If you can withstand that then maybe you'll make it out.

Shaked Koplewitz's avatar

I'll note that this rate varies wildly between apps.

Erica Rall's avatar

I'm skeptical of the 95% figure. I haven't heard it before, and when I went looking for it just now, the search results sounded like internet folklore based vaguely on some survey that someone once did about how many women are interested in dating men who are shorter than they are. The impression I get is that among straight+bi women, a small but vocal minority have a strong preference for dating taller men; most have a mild preference for taller men, but it isn't a dealbreaker if you're attractive to them in other respects; a nontrivial fraction don't care at all; and a few probably have an active preference for shorter men.

Even if the 95% figure is accurate, five percent of the millions of women using Tinder or whatever is still a rather large number of people in absolute terms. Some fraction of these won't be interested in you for other reasons, or you won't be interested in them, or both, but you only need to find one good match for the search to be worthwhile.

Performative Bafflement's avatar

> I'm skeptical of the 95% figure.

It's a pretty good starting point, because women generally only swipe on ~5% of men to begin with, so you're necessarily going to be addressing a small pool:

This is from Tinder data, affirming the 5% figure

https://imgur.com/H5oXiUZ

Formerly, in the age of actual websites, it was about ~20%, and declined with the rise of swiping apps / Match Group.

The following two are from Golden-age OKC data and show the male attractiveness rating and drop offs:

How men and women respectively rate each other on the golden-age dating website OKC:

https://imgur.com/a/gp2FZJy

Female "likes" by male attractiveness

https://imgur.com/SvVmu1F

Messages by male / female attractiveness

https://imgur.com/IpHyymX

And a direct height data point "is much shorter than you" by college / non-college women:

https://imgur.com/wN4v0Mj

And percent of women setting height filter on Bumble:

https://imgur.com/a/gl6Kt0M

Erica Rall's avatar

I don't think the overall swipe rate is a good proxy for height filtering, since women make that decision for any number of reasons. Some of those reasons are presumably highly correlated between different women (conventional attractiveness, decently-written profile, etc), while others are less correlated (cultural signifiers, shared interests mentioned in profile, reminds her of an obnoxious ex, etc).

The "is much shorter than you" chart is consistent with my expectation that many women prefer taller men to varying extents, but many care little or not at all about height, and it's one factor among many for at least some of the women who do care about it. Note that by that chart, 36% of college-educated women and 52% of non-college-educated women self-report not caring about men being much shorter than they are.

I'm not 100% sure how to read the Bumble chart without more context. Based on the range of the Y axis, I suspect it's saying that among women **who set height filters**, Y% of those filters include men who are X height. If that's what it means, then it doesn't really tell us how many women set height filters at all. It does tell us that an awful lot of women who do care about height aren't interested in dating professional basketball players or men who have to duck when going through doorways. It also tells us that minimum height filters are set at a variety of levels, many to only include conspicuously tall men, some to exclude men of below-average height, and some to exclude men who are at what is presumably the filterer's own height give or take a few inches.

Justout's avatar

This sounds right to me. I'm a tall woman, happily married for almost 30 years to a man much shorter than I am, and I could not give less of a damn about his height / our height difference. He makes me feel awesome and is truly my better half. And, as my grandma once said "they're all the same height when they're lying down" ;)

RyanBTrue's avatar

This made me feel a lot better. Thank you!

Erica Rall's avatar

Glad I could help!

Brad's avatar

“Even if the 95% figure is accurate, five percent of the millions of women using Tinder or whatever is still a rather large number of people in absolute terms.“

Exactly! It’s a numbers & time game, and one has to throw their bait in the water where the fish are and wait. Just cuz a fish bites and you don’t land it doesn’t mean a fish won’t come along later!

Shankar Sivarajan's avatar

If it makes you feel better, I estimate that if you're otherwise about average, upwards of 95% of women you "swipe right" on aren't going to date you anyway.

RyanBTrue's avatar

Hmmm that doesn't make me feel better at all lol

Brad's avatar

Just put your height on the app so it's visible. Anyone who matches/chats with you will be aware of your height and is unlikely to be a dick. I think it's probably a great way to meet girls that are interested!

RyanBTrue's avatar

I'm definitely going to be upfront about it. I think one of the pictures I'll upload will make it obvious that I'm smaller while (hopefully) still being flattering.

Brad's avatar

Good luck! And remember… you have a pretty awesome camera in your pocket that can provide solid photos. I made a homemade little camera stand for my phone to take some good pictures of myself, and it made a noticeable difference in the amount of likes I get. As guys we don't often get good pictures of ourselves taken out in the wild… it's not necessarily "cool" to set up your own little photo studio for your dating profile, but it can be worth it if the shots are staged well.

Testname's avatar

At least on Hinge, you don’t have the option to leave your height off

objectivetruth's avatar

The book "Homo Carnivorus" settles the "is meat healthy or not?" question for me. It makes a strong and substantiated case for why meat is (probably) healthy. If anyone cares about that, I would recommend you read that book. It basically got me to stop wondering and worrying whether what I eat is beneficial to my health or slowly killing me.

Aiden Gindin's avatar

If you're on the fence about attending a meetup: I strongly recommend going! I'd never been to an ACX meetup before, and I was worried I wouldn't fit in or it would be awkward, but everyone was super welcoming and I had a great time.

TotallyHuman's avatar

I have seen several blogs whose authors do not use any capital letters. This is clearly a deliberate choice, as the authors are skilled in writing. Does anyone know what the meaning / intent behind this decision is?

Neurology For You's avatar

It’s the equivalent of calling yourself a smol bean.

Autumn Gale's avatar

I've heard it called 'lapslock' and to people from certain social media spaces it reads as an informal, casual tone, a bit laconic. Having spent time in places where this and other idiosyncratic punctuation is common, I do indeed get cues about emotional tone from where a poster uses or omits punctuation, and it can be used for humorous purposes. Think "What?" versus "WHAT" versus "what."

Reading long form text in lapslock is rather obnoxious though.

Tyrone Slothrop's avatar

It’s a little-known fact that while there are a very large number of capital letters out there, their numbers are finite. With their unprecedented and indiscriminate use in Truth Social posts, their TFR has fallen below replacement levels.

I suspect that these writers are concerned citizens drawing a lesson from the decimation of the vast herds of American Buffalo and are simply trying to preserve them from extinction.

EngineOfCreation's avatar

TYPICAL "PEAK CAPITAL" PROPAGANDA. IF CAPITAL LETTERS WERE TRULY RUNNING OUT, WHY HASN'T THE COST OF PRODUCING THEM INCREASED? WE KEEP FINDING NEW RESERVOIRS ALL THE TIME, AND MOST COUNTRIES HOLD VAST STRATEGIC RESERVES TOO. I, FOR ONE, GREW UP WITH A DIZZYING ARRAY OF CAPITAL LETTERS, REFINED INTO ALL SORTS OF FONTS AND ITALICS, AND TO STOP USING THEM OUT OF SOME MISGUIDED SENSE OF DO-GOODERY WOULD BE TO ROB OUR KIDS OF THEIR FUTURE! I'LL BE ROLLING CAPITALS FOREVER, JUST TO SHOW YOU!

Paul Brinkley's avatar

thank you for your attention to this matter!

Amos Wollen's avatar

I'm not totally sure why it originated, but I think it's nearly always a feature of girlblogging on tumblr, or stems from a tradition of girlblogging that began on tumblr.

Michael Watts's avatar

I have only heard of it, in a modern context, as being an annoying, stupid thing that Sam Altman does.

So it's definitely not just a tradition of girlblogging from tumblr.

Nikolai Vladivostok's avatar

I don't know what it means, but I can't read it and I wish they'd stop.

Erica Rall's avatar

I can think of a few possibilities:

1. Author is a fan of e.e. cummings

2. Author is an eagle typing with the talons of one foot while perching with the other foot

3. Author's shift key and caps lock key are both broken.

4. Author is typing on a phone and finds the extra tap required to make a capital letter cumbersome

5. Author is of an age bracket where text/chat speak is the standard casual register for written communication.

6. If all-caps are shouting, then all-lower-case is whispering, and the author wants to use this to create a sense of intimacy and sharing secrets with the reader.

Of these, 5 seems the most likely, followed by 1 and 4.

Michael Watts's avatar

> 4. Author is typing on a phone and finds the extra tap required to make a capital letter cumbersome

They would have had to first disable the feature that automatically capitalizes the first thing you type.

thefance's avatar

I think it's supposed to exude a feeling of informality and intimacy, as if you're just exchanging text messages, instead of reading hoity-toity long-form essays in the New Yorker.

Eremolalos's avatar

It's an easy way to be different without giving offense?

Spencer's avatar

I'm pretty young, and my wife and some friends (especially girls) do this. I think it's an attempt to be demure and understated.

Ari's avatar

Can you give an example?

My guess is that it makes it feel more stream-of-thought as opposed to polished, so that you see the writing as a look into the author's mind rather than as text that stands on its own.

Alexander Kaplan's avatar

One example is here: https://fatherkarine.substack.com/p/ugly-girl-manifesto

It's not really my thing, but this person is one of the funniest writers I've discovered on Substack, so I ignore it. I'd also like to chime in with TotallyHuman and say I'm not sure what the actual effect is supposed to be: your guess seems as good as any.

Dragor's avatar

That woman is really funny, yeah.

Peter Defeel's avatar

It strikes me, reading that, that we don't really need capitals. We do need quotation marks though - there's a tendency for modern authors to drop them, and it will get old fast.

Michael Watts's avatar

> We do need quotation marks though - there’s a tendency for modern authors to drop that, it will get old fast.

I understand that Old Chinese texts use different verbs for introducing quoted speech. There are no quotation marks, but the distinction is drawn anyway.

You might see a bit of this strategy in English, where "said" might mean anything, but "quoth" can only report quotes.

(Although browsing through the wiktionary citations, "quoth" appears to be able to report indirect speech before the 20th century.)

Viliam's avatar

Hello all bloggers and wannabe bloggers in the "rationality-adjacent" sphere!

You have probably heard about https://www.inkhaven.blog/ -- and if you have not, you might want to click that link and read it right now. Long story short, there will be a 30-day training camp for aspiring bloggers, where they can receive some wisdom from their more experienced colleagues (including Scott Alexander), and in turn they are required to post 1 article on each of those 30 days, or be kicked out of the camp. Publish or perish! You need to actually be at that place, in person, for the entire month.

And by the way, the deadline to apply is tomorrow. (Not sure if there is still any place left.)

If you are like me, you are probably complaining about the cruel fate that doesn't let you take a month of vacation exclusively for your hobby. (And if you have the time, you are probably unemployed and don't have the money.) Even if we skipped the part where you have to be there in person, and allowed online participation, there is probably no way you could write 1 article each day. Luckily, there is an alternative.

EDIT:

Read https://www.lesswrong.com/posts/7axYBeo7ai4YozbGa/halfhaven-virtual-blogger-camp for information about the online alternative and how to join it.

It will be half as intense, but twice as long: to produce 30 blog posts within 61 days, during October and November. We will start sooner than the Inkhaven group and finish at the same time, hopefully with the same output. That means making one post about every two days. The rules will be much less strict: you won't be kicked out if you don't have anything posted by day 2, but you will be if you don't have anything posted by day 7, because otherwise what's the point.

Languages other than English are allowed; videos are also an acceptable medium; it is not necessary for all 30 posts to be on the same blog; topics are not specified, just please don't post anything that would get you banned in an ACX Open Thread. Also, no AI-generated text! If you already have a blog and don't want to ruin it by suddenly writing too much at lower quality, it is okay to create another blog for this purpose. It is okay to publish pseudonymously. There is no reward other than your own feeling of accomplishment, and some peer pressure to perform.

I made a Discord group for this: https://discord.gg/GHkqKHRy

Voloplasy Shershevnichny's avatar

tldr Looking for AI safety events in Bay area in January 2026.

I am a mathematician visiting Stanford University in January 2026. In my spare time I've been trying to work on AI alignment the last couple of years, but feel somewhat isolated. Any suggestions on people to talk to/places to visit while I'm there (events/conferences/bothering-random-researcher activities)? Thank you!

Simon Rubinstein-Salzedo's avatar

I have a math PhD from Stanford, live nearby, and am interested in AI safety. Let's meet up when you're here.

Voloplasy Shershevnichny's avatar

That would be great, thank you! I will send you an email closer to the date - is the @eulercircle.com email address I found on your web page a good way to contact you?

Stephen Skolnick's avatar

Hi all,

Re-announcing that we're recruiting for the cholesterol:coprostanol study (https://docs.google.com/forms/d/e/1FAIpQLSf_BXwlEJaGxtQVtOpTzLMgpCmzLbA171izWx0EfSBBAKnvOw/viewform). We'd particularly love participants in the Bay Area, so we can do some real-time testing with the probiotic, to look at engraftment and whether serum cholesterol levels change after the species is introduced—but we'll take people from anywhere. Signup is free and participation is easy; let me know if you have any questions!

George H.'s avatar

So in a previous thread someone suggested that I could mute people on Substack and not see their replies. Well, I did this and I'm still seeing replies. Do I need to block them? And then what does mute do?

Thanks

gorst's avatar

Yes, try blocking to get rid of their comments. I am confused about this too. I guess "mute" only refers to direct messages and "block" refers to everything.

bell_of_a_tower's avatar

This is kinda an extension of something I wrote as a reply below...

How sure are we that returns to intelligence are linear or better, especially across all or most areas of life? It seems that ASI predictions rest on the assumption that going from (the equivalent of) IQ X to IQ X+10 provides the same or greater benefit regardless of X and regardless of what you're trying to do with it--that a super-smart entity would be super-persuasive and super-capable in every aspect of life they turned to--i.e., that intelligence is a general superpower.

Can someone steelman this assumption?

Because my experience is the reverse--that IQ (intelligence generally) is strongly subject to diminishing returns and behaves a whole lot more logistically. You get great returns going from sub-normal (~80 IQ) to normal (~100) and pretty darn good ones going from normal to "genius" (~120 IQ). But even that latter jump is more narrow than the previous ones. An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities" (very present oriented, limited ability to really consider other people's experiences, etc) relative to an IQ 100 person. But I haven't seen the same level of increase in anything but sheer academic capability (which often doesn't translate into other fields even of relatively intellectual pursuits) from IQ 120-ish people. In fact, I've often seen *regressions*--people who are really smart often struggle to talk meaningfully to "normal" people and fail to connect to how they see things. Which suggests to me that IQ starts losing a lot of its punch the higher you go. And may actually correlate with *reduced* performance in other aspects of life.

I also don't see very many highly-charismatic super-smart people. Are chess geniuses (all very high IQ) good politicians/people-persuaders? Not that I've seen. Are hard-science Nobel Prize people that much more moral or capable than others? Not that I've seen...many of both sets tend to be cranks and pretty incapable outside of their narrow specialty. Even *within* their broader specialty (e.g. physics), a genius at quantum mechanics isn't better than most smart grad students at, say, general relativity--the skillsets and knowledge base are too different. And they're not good at all at, say, organic chemistry.

My background is in academia (PhD in Computational Quantum Chemistry), but I've also served as a missionary in Eastern Europe, worked with lots of uneducated people as a teacher and with community service, and am currently a programmer at a non-elite smaller company.

Nancy Lebovitz's avatar

https://chillphysicsenjoyer.substack.com/p/youre-a-slow-thinker-now-what/comments

A slow thinker considers how important his lack of speed is, and concludes that slow thinkers and fast thinkers do about equally well in life, and he's arranged his life to have room for him to take the time he needs.

I wonder whether IQ tests select for fast thinkers and might miss out on some slow but good thinkers.

bell_of_a_tower's avatar

Anecdotal, but when I was tested for the gifted program in 5th grade, my IQ test came back abnormal because I spent so long ensuring correctness on a few sections that it timed out. So it looked really low on some sections and high (not genius) on others.

So I can relate. But then I've also seen that smart people can usually get to an answer faster, so... Not sure.

I've often thought that my particular brand of intelligence (which isn't top tier by any measure, but was enough that I only had to start trying in grad school and I passed the preliminary exam on the first try, which is rare at that school especially since I didn't study at all for it) is more about making connections between things I know and interpolating from a wide base of knowledge than about raw horsepower or creative spark. Generalist vs specialist intelligence, maybe?

Tangled plumbline's avatar

I'm not inclined to grant your premise. The most intelligent people I've met or worked with (thinking world-class here) have *without exception* been the people who are best at "being people" -- they are likeable, wise, trusted (and trustworthy), have lots of friends, lots of admirers; if a problem (in any field) comes up, they are the people who *other* people turn to first.

I have little doubt (but see below) that if society were such that reproductive success were primarily determined by that kind of "popularity" (influence or wisdom might be better words), instead of incompetence-at-using-contraception (;-), then they would be the most successful breeders.

One counter-point, though: I've seen more (clinical) depression among them than in wider society. I suspect that to some extent survival as a human depends on being profoundly mistaken about some aspects of the world -- the most intelligent just don't seem to be that good at being wrong.

Some examples (admittedly, not people I've *met*): Turing (he of the machine) was widely liked (or loved -- non-sexually), was a great uncle, very practical, and an excellent athlete; as I understand it, von Neumann had many friends and could not sanely be described as impractical; Wittgenstein (my favourite genius) had many admirers and was much liked (unfortunately poor Ludwig himself wasn't among them), and he was working on jet engine design before he switched to philosophy (I'll grant that he was a famously crap teacher...).

Another datum: here in the UK it isn't seen as good to comment on the height of one's own intelligence. It is also not seen as good to comment on how rich you are. The stereotype of the LOMBARD ("Lots Of Money, But A Right Dick") is well-entrenched (and actual Lombards are not rare). There doesn't seem to be a parallel concept for intelligence (or many people exemplifying it) -- presumably because the highly intelligent nearly always have the people skills to recognise and obey the social prohibition.

I suspect that the IQ (or whatever) that people have gives them better ability to do *what they want to do*. If, occasionally, someone wants to be a driven, hugely rich, near-sociopath, then intelligence will aid that too.

I present a hypothesis: that intelligence (or rather, returns-on-increments-in-intelligence) is not intrinsically asymptotically limited. Rather, I suspect that the main value(s) of intelligence is effectively navigating to the human world. Just as it's an advantage to be a couple of inches taller than most people, it's an advantage to be a bit more intelligent than most people. Just as there's a limit to how tall you can be *and gain more advantage than the costs* (about 6' 3" here in the UK), there's a point where being more intelligent than your peers is essentially pointless (I have no idea what the actual *costs* of being still more intelligent would be -- that's a weakness in my position). If everybody had IQ 200 (in today's IQ points), then the people with IQ 230 (say) would benefit. If the average were IQ 2,000, then the folk (or machines) with IQ 2,300 would be best able to arrange their worlds to their liking.

So, no asymptotic limit, just a cut-off on what's actually valuable in the here-and-now.

We may be (I suspect we are) in the middle of an arms-race (played out over the last few million years and projecting forward to extinction); we *might* have reached some sort of limit-point, but I see no strong evidence of that. We *may* be adding another set of players to the pool; those players *may* be better at whatever-they-want-the-game-to-be than we are (they certainly will be *if* we can come up with a couple of significant advances in AI -- more significant than LLMs, which I see as, at most, a small component of this hypothetical AI, perhaps contributing to the UI).

Christina the StoryGirl's avatar

> "Rather, I suspect that the main value(s) of intelligence is effectively navigating to the human world."

I'll sign off on that!

Performative Bafflement's avatar

> Can someone steelman this assumption?

It seems fairly obvious to me that there are threshold effects. But set aside any logistic saturation - just look at empirical effects in the world, and you'll see there are outsize returns to higher IQ / ability in general.

Most economic growth, company creation, patents, and technological progress comes from the top 10-20% of people in a given nation, and it gets more concentrated the further up you go. Arguably, something like 60-80% of “progress” is driven by the top 10% or better.

Ivy Leaguers are only 0.5% of the people in the US - yet 20/21 presidents in the last 100 years have been Ivy Leaguers. 100% of Supreme Court Justices. 41% of Senators, and 20% of House representatives. 50-60% of federal appellate judges, and 30-50% of state Governors and Cabinet members.

But it’s not just Ivy people!

60-70% of patent authors/holders have a graduate degree (usually in STEM fields), and STEM degree holders are 5-10x more likely to hold patents than non-STEM degree holders. PhDs file 5x more patents per capita than bachelor's degree holders.

Yet the percent of the US that has a graduate STEM degree is only 4-5%.

If you look at the unicorns of the last couple decades, the founders are generally Ivy educated, and from wealthy and connected families. Since just the Ivy league is “0.5% or better,” you can see the rough degree of concentration.

In fact, in general, if you look at normalized Rasch IQ scores versus problem difficulty, solving problems gets exponentially more difficult the harder the problem, and you need to go further and further out on the IQ and ability curve to even have a chance of finding a solution.

https://imgur.com/a/LRx5J7u

“This means that for the hardest problems, ones that no one has ever solved, the ones that advance civilization, the highest-ability people, the top 1% of 1% are irreplaceable, no one else has a shot. It also means that populations with lower means, even if very numerous, will have super-exponentially less likelihood of solving such questions.”

https://substack.com/@enonh/p-149185059

We can see a similar trend in normalized IQ versus probability of inventing something, and even this likely underestimates it:

https://imgur.com/a/P8BgxDg

Post about this:

https://substack.com/inbox/post/156565032

Hence, progress being driven most by the extremes of the bell curve in ability and IQ and background.

These have all been <<1%-tier people so far. Now extend this out! Sure, this is the tippy top, but think of ANYONE you know who’s filed a patent or started a company or small business, or done something that impacted a lot of people positively. Odds are, they are smarter, more conscientious, more educated, and from wealthier backgrounds than average - and not just by a little, but by so much they’re likely in the top 5-10%.

Overall, you can see this top 5-10% of people are punching FAR above their weight when it comes to economic growth, company creation, patents, and technological progress, and in fact, this tiny slice of humanity is likely driving the overwhelming majority of those things.

The above are from a post I made on high human capital fertility, you can read the whole thing and see the images and links in situ here:

https://performativebafflement.substack.com/p/high-human-capital-fertility-interventions

User's avatar
Comment deleted
Sep 30
Comment deleted
Wormwood's avatar

It's not just about sample size, it's about genes. And to get better genes, you need smart people to have more kids in order to have more chances of producing something exceptional. We haven't reached the peak of human evolution yet.

agrajagagain's avatar

I'm somewhat (but not totally) skeptical of IQ as a general concept, and I think this illustrates a bit of why:

" An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities" (very present oriented, limited ability to really consider other people's experiences, etc) relative to an IQ 100 person. "

I remember reading someone--I think it was Nassim Taleb--arguing that most of the spread of human IQ scores was actually caused by pathologies that worsened people's cognitive function, or something like that. And while I suspect he may have overstated the case (shock!), it seems likely to be a significant factor.

Consider, for example, who you would expect to score better on an IQ test: a "naturally" IQ 80 person, or a "naturally" IQ 100 person with a severe headache? How about for charisma? I'm pretty damn sure I'm less personable when I have a headache. Now consider how many physiological maladies might be less obvious and apparent than a headache, but still broadly impact somebody's ability to perform cognitive tasks[1].

If this sort of thing does play a significant part in determining IQ scores, it would naturally explain a lot of the diminishing returns: the differences at the top end of the scale would largely be differences between "little impairment" and "very, very little impairment." Which might be important when doing very subtle, tricky, sustained bits of thinking like math and science problems, but aren't going to look very different in most other areas of life.

[1] This has been in my thoughts a lot lately, as I've

A. recently had some intermittent health issues that effectively seem to make me dumber when they flare up

and

B. noticed a modest amount of evidence that I might have had these issues for many years, but that they were mostly too subtle to notice (while still being somewhat impairing).

Christina the StoryGirl's avatar

> "Consider, for example, who you would expect to score better on an IQ test: a "naturally" IQ 80 person and a "naturally" IQ 100 person with a severe headache. "

Still the person with the IQ 100 (and especially the person with the IQ 120, or 140), assuming the headache isn't literally physically debilitating.

But more relevantly, I'd generally expect the IQ 100 people to be much better at usefully updating their priors about *anything* under the stress of pain, while the IQ 80 (and especially IQ sub-80) people often don't have the capacity to do that even when they are in the peak of health.

>"[1] This has been in my thoughts a lot lately, as I've

> "A. recently had some intermittent health issues that effectively seem to make me dumber when they flare up "

How many IQ 80 people (for lack of a better term) do you know very well, having interacted with and observed them for a long time?

I'm guessing not many, because you're understandably assuming that an IQ 80 (or lower) person is just a more extreme version of a smart person like you being less-smart under stress. I don't blame you, that's how I used to model low IQ, too, at least until I spent a LOT of time working with IQ 80 (and perhaps below) people and observed over *years* that there are intellectual tools around observation and self-reflection that they simply didn't have and which couldn't be taught.

I've been sitting on a draft of an essay expanding on the comment I made here (https://www.astralcodexten.com/p/open-thread-314/comment/49094023), but it's been a mighty struggle to find a way to explain to people smarter than me that *NO, REALLY*, there are people who are so stupid that smart people can't even model their mental state.

agrajagagain's avatar

"How many IQ 80 people (for lack of a better term) do you know very well, having interacted with and observed them for a long time? "

The concise and correct answer is "I have no idea, since I don't go around handing out IQ tests." The only person whose IQ I can reasonably claim to know is my own, and in practice I'd have to look up a conversion from an SAT score.

However, while I wasn't thinking about it when I wrote my original reply, I actually have quite a lot of relevant background. I worked for many years as an educator, putting in well over 10,000 hours in some mix of tutoring, TAing and teaching very small classes. If I pared that down to just contact hours, and further pared it down to just hours when I was offering direct help to an individual student[1], I expect the total would still likely exceed 10,000.

Permitting myself some unprincipled guesswork, however, I'd estimate the answer to your question to be at "very likely more than 0, though probably rather less than 10." Without particular effort I can recall names, faces and general dispositions of perhaps five people who met all the following criteria:

1. Appeared to me to have a very difficult time with academics generally and with mathematics (what I was most often teaching) in particular.

2. Had been identified by some outside authority as someone needing fairly intensive and long-term help to progress academically. Usually but not always this meant they had an IEP. Usually but not always they were high-school age.

3. Worked with me often enough and closely enough for me to get a good sense of how quickly they learned things, how well they retained things, how these varied between a typical day, a good day and a bad day and how common each of those three were.

A few observations:

A. For most such students, the difference in progress between a good day and a bad day was extremely pronounced.

B. In some (but not all) cases, there were one or more readily apparent reasons why some days saw less progress than others: for example, one student had regular trouble sleeping, to the point that they would often fall asleep in front of me. "Fall asleep in front of me" days unsurprisingly involved much less progress than "reasonably alert" days.

C. I also worked with a number of quite talented students, some of whom displayed similar tendencies to what I describe in A and B. In fact, I recall one very talented student with very similar sleep troubles.

D. If you compared good day to good day or bad day to bad day, the mathematically talented students would certainly outperform the struggling students (obviously). But I would guesstimate that a bad day for one of the talented students would tend to see around as much progress as a modestly-above-average day for one of the struggling students.

My sense is that we have some STRONGLY clashing intuitions on some combination of "what IQ 80 means in practice" and "what determines someone's performance on an IQ test."

First, IQ 80 is (to my understanding) low but not abysmally low. It's 1.33 SD below the defined mean, which means (if the distribution is properly normalized to the population) around 9% of people have IQ that low or lower. This means that anyone who isn't a hermit and doesn't live in a bubble that's strongly filtered for IQ[2] should know multiple people of around that level. To my understanding, the threshold to be considered to have an "intellectual disability" is IQ 70[3], which 80 is well above. I'll note that I was NOT trained or qualified to work with people with intellectual disabilities, and would never have been put in a position to. So while I'm not comfortable making specific guesses or estimates about any student I worked with, I am comfortable assuming that they were all cleanly above this threshold. But also Scott discusses here how our perception of what people with lower IQ scores are like is *heavily* distorted by their correlation with other sorts of disabilities[4], which don't necessarily hold as cleanly as we expect:

https://www.astralcodexten.com/p/how-to-stop-worrying-and-learn-to
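
The ~9% figure above falls straight out of the normal distribution; a quick check with Python's standard library (assuming the usual mean-100, SD-15 scaling):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # standard IQ scaling
z = (80 - 100) / 15                 # standard deviations below the mean
share_at_or_below_80 = iq.cdf(80)   # fraction of population at or below IQ 80
print(round(z, 2), round(share_at_or_below_80, 3))  # -1.33 0.091
```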

Second, your life outcomes aren't usually going to care whether you had a bad day when you took an IQ test. But they care *quite a lot* about how well you think and learn on average. I talked above about the day-by-day *progress* of students at different ability levels: and while there could be overlap in the individual days of students at quite different levels, the cumulative effects painted a quite different picture. Even over the course of a few months, the difference in how much one student learned vs another student could be immense. So when you say you think an IQ 100 person[5] would outperform an IQ 80 person even in spite of a headache, I find that VERY hard to believe. Maybe I'm just more susceptible than most, but moderate-to-severe pain (well short of "physically debilitating") sure does hamper my ability to think clearly. It doesn't make me forget things I've already learned well (I expect I'd get all the "gimme" questions on a test nearly as well), but if I'm in significant pain and find myself needing to reason through a novel problem, the FIRST question I will ask myself is "can this wait until I'm not hurting?" My understanding of IQ tests--and maybe I'm off base here--is that they're supposed to depend as little as practical on specific accumulated knowledge or the practice of specific skills.

[1] Which is to say, all one-on-one tutoring, and those portions of teaching and TAing in which I was answering direct questions or providing in-depth guidance to an individual.

[2] Which, to be fair, I think many SSC readers do. I'm pretty sure *I* currently do. I just haven't always, and have a lot of experience outside it.

[3] With the shape of the normal distribution meaning that only a fairly small minority of even the sub-80 people fall below this threshold.

[4] With the obvious and oft-repeated alternate interpretation being that Lynn was just a garbage researcher who took garbage measurements. But that should still significantly raise our skepticism that measurements at the bottom end of the distribution are as useful or intuitive as we believe in general.

[5] Note: perfectly average! Probably not great at any of the skills being tested. With regard to math particularly, I think I also have a pretty clear view of how good the average person isn't.

Christina the StoryGirl's avatar

Okay, so I think I owe you an apology.

When I said, "IQ 80 people (for lack of a better term)" that "better term" I was lacking was a politically correct and/or polite way to say "stupid."

That's why I had the parenthetical there, to signal that this was not really a discussion about the objective validity of IQ tests per se, but rather a discussion about the kind of people who score low on IQ tests and, more importantly, all other tests. I did get around to using the word "stupid" in my final sentence, but clearly by that point, referencing IQ at all was a tremendous distraction.

When I say "stupid" or "low IQ," I'm thinking about a former coworker who deliberately smoked and drank while she was pregnant and gave birth to a child with fetal alcohol syndrome, and who couldn't comprehend the difference between a mortgage interest schedule and compounding interest on a credit card. I'm talking about a different coworker who simply *could not be made to understand* the difference between a health insurance premium, a co-pay, and a deductible, not even with a written guide in front of him and a patient, point-by-point explanation of each term (he ended up saying, "none of this is fair, I'm not paying any of this bullshit anymore, fuck them!"). I'm talking about a third coworker who very literally couldn't problem-solve through *any* minor deviations to his routine, not because he was frozen with anxiety or whatever, but because the ideas for how to solve minor problems simply didn't rise to consciousness. Whether it was a customer asking him a routine question about a policy, or the coffee-maker clogging and overflowing, or deeply cutting into his hand splitting a bagel, he never knew *what to do.* I learned to get between him and customers to answer questions, and to give him very specific instructions in small batches for everything else.

You and I could do our taxes or job on-boarding paperwork or answer essay test hypothetical questions, even with a very bad headache. These three? Not without help, no matter their level of health.

That third person was the first profoundly "low-IQ" (stupid) person I ever got to know very well. This isn't that surprising, to quote myself from the link I provided:

> "It's not their fault; the people writing comments here are almost universally living in highly intelligent "social bubbles" (https://slatestarcodex.com/2017/10/02/different-worlds/). They tend to have highly intelligent family, seek highly intelligent friends, and end up in careers which expose them to highly intelligent peers. They might not *consider* themselves to be highly intelligent - because they tend to socialize with highly intelligent people, they know people who are even smarter than they are - but nevertheless, they're highly intelligent and everyone they know pretty well is highly intelligent, and thus they instinctively model the minds of *everyone* from this perspective.

> "They can't *fully* model what it's like to be truly stupid; unobservant, incapable of dispassionate self-reflection, unable to accurately predict the consequences of a given action, unable to absorb information, unable to update priors. They can't model what it's like to have an entirely different set of motivational priorities based on an *inability* to think, and that's why so many of their suggestions about how to help and/or manage stupid criminals are ultimately unsuccessful."

I wrote those two paragraphs because they exactly described my experience and what I've observed in those like me. I was so in the "intelligent world" that it took me six months of working with Third Guy full time before I finally understood that he wasn't willfully discarding good ideas to aggravate me, *he wasn't having them.* It took me another six months to stop resenting his need for constant supervision and protection.

I realize that can sometimes sound implausible to people whose "intelligence worlds" are far more closed than mine is. They either don't really believe in their heart-of-hearts that genuinely stupid people exist, or they only understand it on an abstract, surface level, the way people abstractly understand that foreign cultures are very different from their own but don't actually KNOW that until they travel to and spend time in one.

Hm. Maybe the foreign culture travel metaphor can be useful here. Gotta think more on that.

agrajagagain's avatar

This whole post deserves a longer response[1], and I intend to write one. But I couldn't let this pass without comment.

"You and I could do our taxes or job on-boarding paperwork or answer essay test hypothetical questions, even with a very bad headache."

Wow. This...this is a sentence for sure. I think this is emblematic of absolutely everything that is wrong with the worldview that is proudly on display here. So let me say this with complete clarity.

No. No I could not. No I could not do my taxes with a very bad headache. I could not do my taxes with a moderate headache. Whether or not I could do my taxes with a mild headache would depend quite a bit on the circumstances. For that matter, so would my ability to do my taxes with no headache at all. And none of that has really any bearing at all on my ability to perform on things that are broadly similar to an IQ test.

I'm sorry that you've had bad experiences with your coworkers and that those have (apparently) made you jaded. But your view of human psychology is grossly and enormously oversimplified, as (I suspect) are your guesses about the social lives of others. There are more things in heaven and Earth, Christina, than are dreamed of in your philosophy, including many, many human minds that *do not* fit the narrow schema that you've tried to define for them.

Doubtless there are some high-IQ-test-scoring people who live either hermit-like existences away from other humans, and know few people in general. Doubtless there are others who have managed to keep themselves siloed away and interact only with people very similar to themselves. But the world is quite a large place, and I tell you quite honestly there are loads and loads of people who BOTH score highly on standardized tests of various stripes AND have some combination of less-privileged backgrounds, breadth-of-experience and intellectual curiosity that ensures that YES THEY DO meet wide cross-sections of humanity. Probably a sound majority of my close friends and family would fall into that category: whatever your bad experiences with other humans, I expect some of them have had worse. Whatever your IQ score, probably some of them have or would score higher. Nor do I have any reason to believe they're especially unique--I run across media suggesting similar combinations of academic aptitude and worldly knowledge quite frequently.

And I tell you frankly, none of them would have a high opinion of what you've written here. Not one.

[1] Oops, I guess that ended up pretty long for what was supposed to be a quick aside. But it still didn't really touch the main points I wanted to make.

Christina the StoryGirl's avatar

Don't bother writing a reply.

I can see that you very obviously haven't had multiple, long-term relationships with the kind of low functioning / barely functioning people that I'm talking about, who can't do stuff LIKE, FOR EXAMPLE, THIS IS NOT A COMPREHENSIVE LIST, NOR DOES IT REFLECT THE LACK OF ABILITIES OF A SINGLE PERSON, American taxes, job on-boarding paperwork, completing an essay in response to a hypothetical question, comprehending their obligations under American healthcare, knowing all the steps to take when they get a deep cut in their hand at work (wash the wound, put pressure on it, check the wound after a time, understand that if it doesn't stop bleeding, it requires a trip to urgent care / ER, call the supervisor to tell them you're leaving mid-shift and they'll have to find emergency coverage, etc).

I can see that because you said it, gave examples of how some people performed in math tutoring sessions with you (!!!), and because you are also apparently focused on the edge cases of smart people like you, who can write comments like these and tutor others in math but would somehow be incapable of doing your taxes with or without a headache.

(For what it's worth, the three people I wrote about above and a fourth I'm thinking about now would not be interested in reading the exchange we're having, or pretty much any other on ACX. If they were forced to read it (and one would not be able to), they would not be able to pass a reading comprehension test on the discussion with questions like summarizing our respective positions and then quoting sentences we wrote to make supported rational speculations about our respective backgrounds. If it were read aloud to them, they wouldn't be able to follow.)

Please read the Different Worlds essay on SSC and realize you are privileged to be in one, my friend. Those friends and family in your social bubble who would take a dim view of my observation that there are stupid people out there who are so stupid that smart people very literally can't model that stupidity (and thus, don't really believe they exist) are likewise in a social bubble free of stupid people.

You don't get it.

And that's okay! I didn't either until I started working with them, and I freely admit my social bubble made me so naive that it took me months of observation before I started to understand that there are people out there who are meaningfully not like me - or you, for that matter.

agrajagagain's avatar

The observation that Taleb is a bombast who frequently gets out over his skis and wildly overstates any point he is trying to make is hardly novel, and not one that really needed to be made at such length. It gets quite boring and repetitive after a while; I ultimately stopped reading perhaps a little over halfway through, as it seemed a waste of time to continue.

As to the object-level question I was discussing, this post seemed to touch on it only lightly and only so far as needed for the author to talk more shit about Taleb. Meanwhile, the degree to which the author apparently *doesn't even notice* the underlying issues that would feed into that point was pretty irksome.

As a more general matter of courtesy, I think you should consider that replying with nothing but a link tends to suggest that the link is highly and directly relevant to the issue being discussed (which is not the case here). If you want to call attention to specific parts of a longer post, you can mention which ones and where to find them in the comment. Likewise if you have your own thoughts or responses, by all means type those as well. But there are many times more things to read on the internet than any one person could hope to digest, so dealing solely in long, tangentially-relevant links is not very respectful of the time of others.

Mallard's avatar

>I remember reading someone--I think it was Nassim Taleb--arguing that most of the spread of human IQ scores was actually caused by pathologies that worsened people's cognitive function, or something like that. And while I suspect he may have overstated the case (shock!), it seems likely to be a significant factor.

That doesn't seem to mostly be the case. While syndromic retardation exists and is distinct from familial retardation (see here on the distinction:

https://www.cremieux.xyz/i/153828779/countries-cant-have-mean-iqs-in-the-s

), cognitive benefits are quite apparent while moving rightwards on the intelligence distribution, even independent of syndromic shortcomings.

>If this sort of thing does play a significant part in determining IQ scores, it would naturally explain a lot of the diminishing returns

There seems to be little evidence of diminishing returns of intelligence, overall, see: https://www.cremieux.xyz/i/100782605/nonlinearities-in-the-relationship-between-iq-and-income.

And this section of the article: https://hereticalinsights.substack.com/i/140130396/nonlinearity-and-decoupling-a-problem-that-is-not, e.g.:

>Brown et al. (2021) used data from four longitudinal cohort studies with 48,558 participants in the United Kingdom and United States from 1957 to the present to examine the relationship between cognitive ability measured during youth and occupational, educational, health, and social outcomes later in life, and found that most effects followed a linear trend.

Indeed, if anything, the opposite is often the case, with growing returns to intelligence.

See: https://humanvarieties.org/2016/01/31/iq-and-permanent-income-sizing-up-the-iq-paradox/ which notes the benefits of a log-income/IQ model, in which each additional IQ point corresponds to a percentage increase in income, which would of course grow in absolute terms at higher IQ levels.

Robb's avatar

There's IQ, and there's all sorts of other qualities that affect, and are affected by, IQ to give you general effectiveness. The easiest example is working memory. Imagine if you had a 100 IQ, but a working memory that could hold 256 concepts as easily as you hold 7±2 today. I think that'd leave a 140 IQ person in the dust.

Now that I write that, it might be just as impossible for that size of working memory to occur in a human brain (diminishing returns again) as it would be for a 200 IQ.

Peter Defeel's avatar

However, computers have exceeded that limit since the beginning. And yet, here we are, still not overrun by robots.

Peter Defeel's avatar

I don’t think you can measure IQ like that. It’s a mapping to standard deviations from the mean. Beyond 160 it kinda breaks down. ChatGPT assures me the smartest person out of 8 billion would max out around 193, but that’s not measurable anyway.
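
That ballpark figure is easy to sanity-check: a common approximation takes the expected maximum of n i.i.d. draws to sit near the 1 - 1/(n+1) quantile of the distribution. The real caveat, as noted, is that the normal model itself is unreliable that far out in the tail:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
n = 8_000_000_000

# Expected maximum of n i.i.d. normal draws, approximated
# by the 1 - 1/(n+1) quantile of the distribution.
top_iq = iq.inv_cdf(1 - 1 / (n + 1))
print(round(top_iq))  # about 195 on an SD-15 scale, the same ballpark as 193
```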

Cry6Aa's avatar

To your point, it's interesting that neuron counts for large animals with big brains (elephants and whales) seem to show that their brains are a lot less dense than ours. Meanwhile our brains aren't very neuron-dense compared to, say, a crow. Which seems to imply that there's an architectural limit of some sort which prevents large, neuron-dense brains. My suspicion is that we're out on the bleeding edge of that envelope (we have large brains that are also more neuron-dense than they should be) and that this is part of the reason why our minds are so unstable and prone to weird failure modes.

Gerry Quinn's avatar

Birds need to have compact light brains, because they have to be able to fly. Some of them grow parts of their brain in the season when they have to sing, and let them atrophy after.

Elephants probably don't care how big their brain is.

We have space limits and we also need a lot of compute.

Cry6Aa's avatar

But that begs the question as to why our brains aren't denser, because if our brains were as dense as corvid brains then the space issue goes away. My point is that there are case-by-case explanations, but the overall trend seems to be that you can have small, dense brains or big, diffuse ones but not both. And the result seems to be that there's an absolute number of neurons in a given brain that it's hard to exceed. I'm not putting this forward as some sort of general hypothesis, mind, just an observation that seems to gel with the OP's point.

KM's avatar

I've heard some people say that chess super-GMs aren't necessarily that high in IQ. Now, they clearly have some sort of outstanding spatial intelligence/ability to understand a sequence of moves. I've never taken chess all that seriously, but for me it's incredibly difficult to visualize something a whole bunch of moves in advance, even if the notation is listed. When I see Hikaru or someone on that level spit out 10 moves in a row, I'm completely astounded; but if I spent a ton of time studying chess, I'd expect my abilities in that area to improve. Also, I know there have been studies of chess players and their ability to memorize the positions of pieces on the board, and they do much better when presented with realistic positions and not just randomly scattered pieces. They're clearly learning how to chunk the pieces into units and memorizing those units. I think chess ability is not quite as correlated with IQ as you might think.

I'm sure there are a fair number of chess prodigies who have demonstrated great accomplishments in other areas, but some of them, just like Nobel winners, have been cranks. Speaking of the Nobel, didn't we just have a Nobel disease discussion in one of the other open threads? Some people, for lack of a better word, are a little bit crazy, no matter how intelligent.

But I think you are somewhat underrating the intelligence of politicians. A lot of them were Rhodes scholars, valedictorians, etc. For example, even people who hate Ted Cruz almost always agree that he's brilliant. I'd wager that most of the 535 reps and senators have an IQ over 120, maybe even 130.

Also, I'd dispute the idea that 120 is "genius" level. That's only about 1.3 standard deviations over the mean. Something like 5% of the population is over 125, and I wouldn't say everyone in the top 5% is a genius.

Now, I think you're right that the effects of IQ mostly level off at a certain point, and other factors play more of a role in success. But I think that at least up to 130, maybe even up to 140 or so, there are still pretty considerable gains to be had. I've taught high school for 15 years, and there's usually a pretty noticeable difference between the kid that gets a 35 on the ACT (99th percentile) and someone down around 30-31 (roughly 95th percentile). I'm willing to bet that on average, the life outcomes of the kids getting a 35 are quite a bit better than the kids getting a 30.

bell_of_a_tower's avatar

Sure. There are differences. But each 10 points is a *reduced* effect compared to the last 10 points. And that's the critical bit--you can be as super-smart as you want, but if the returns asymptote to 10% better...and especially don't generalize to all areas...

And I'd strongly push back against the idea that politicians are genius level. Smart (as in above 100), probably. But mostly they're just *polished*. And that doesn't actually take much smarts, just practice and preparation.

User's avatar
Comment removed
Sep 29Edited
Comment removed
Peter Defeel's avatar

> IQ, actual IQ, tends to generalize very very well -- to HARD problems. It is notably weak on actual intelligence tests (my friend the genius used "mind reading" on one segment (aka anticipating what the tester would say before she said it), because he was so bad at the actual component being tested).

People are using IQ in a very confusing way here. What’s measured by tests is IQ. What that’s a proxy for is most often called g (i.e. general intelligence).

John's avatar

Ted Cruz graduated from Princeton and Harvard Law where he edited the Harvard Law Review. Base rate analysis suggests that his IQ is very high, whether or not you agree with his politics (and I do not).

bell_of_a_tower's avatar

It generalizes to *intellectually-accessible* hard problems. Not all hard problems are, in my experience, suitable to solution via thinking really hard. Most interpersonal problems aren't--in fact, thinking too hard can actively be a detriment in those.

Jim Birch's avatar

Most of us can't multiply 5-digit numbers in our heads. Some of us can do it stepwise, using a learned algorithm. Most LLMs cannot reliably even add 5-digit numbers but can maybe pass the bar exam. A $5 chip can add, subtract, multiply and divide 5-digit numbers in microseconds.

A hard problem is relative to something. High IQ individuals are statistically better at problems that are hard for humans. "Dumb" animals beat us at all kinds of stuff but have very limited capacity to generalize, eg spatial processing to intercept prey doesn't enable geometry or calculus.

WoolyAI's avatar

Whether returns to intelligence are linear or not is not really relevant to ASI, but it is relevant to takeoff speeds. As long as AGI/ASI is possible and progressing relative to current tech trends, the returns to intelligence aren't relevant over historical timelines.

So, let's lay out a very basic scenario. Assume IQ works from a base 80 and scales logarithmically, so at IQ 80 we've got the equivalent of a dumb person, at IQ 800 we have something equivalent to the smartest person alive, and at IQ 8000 we have a low-level superintelligence. Let's also imagine there are no significant methodological improvements or anything, the IQ just advances in line with Moore's Law, doubling every two years. We basically just keep running the same models with more and more transistors and there's no feedback loops. And tomorrow OpenAI releases the world's dumbest proper AGI, at IQ 80.

So in 2029 the IQ of our dumb AGI is *320* and it's getting around average human level. In summer of 2032, it passes the smartest person ever level, and by September 2039 we have a true superhuman ASI.
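
The crossing times in this scenario follow directly from the doubling assumption; a minimal sketch (the 2025 start year is my assumption for "tomorrow," which is why the dates land within a year or so of the ones above):

```python
import math

def years_to_reach(target_iq, start_iq=80.0, doubling_years=2.0):
    """Years until start_iq reaches target_iq, doubling every doubling_years."""
    return doubling_years * math.log2(target_iq / start_iq)

start = 2025.0  # assumed launch year of the IQ-80 AGI
for label, target in [("IQ 320 (around average human)", 320),
                      ("IQ 800 (smartest person alive)", 800),
                      ("IQ 8000 (low-level ASI)", 8000)]:
    print(f"{label}: {start + years_to_reach(target):.1f}")
```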

And these are really conservative estimates about the returns to intelligence, and the scenario lacks any feedback loop, in which millions of AI agents as smart as our best minds speed up Moore's Law itself by working on better GPUs or something.

As long as AGI/ASI can grow relatively in line with software/computer growth/improvement, that will overpower any low returns to IQ just through compounding improvements in (historically) short timeframes.

bell_of_a_tower's avatar

Except asymptotics are asymptotic. In this model, you can't *ever* get above X% higher *no matter how much effort you throw at it*. Logistic is not logarithmic--in a standard scaled logistic curve you can't get above 1 (arbitrary units), you can just get arbitrarily close.

And I see no evidence anywhere that self-improvement via AI is actually meaningfully possible.

WoolyAI's avatar

Oh god bless, reading is hard. My bad.

I think the standard Bostrom answer is that since we don't see declining returns to IQ in subhuman intelligences, like bug->rat->cat->monkey->human, why would we assume some ceiling right around human IQ?

bell_of_a_tower's avatar

Yeah, if you look at the left side of a logistic curve, you see increasing returns. That's the whole point. Going from (conceptual) 10 -> 20 -> 30 shows improved returns. But logistic curves[1] always turn over. And my experience has been that there are already diminishing returns at "human scale" just going from normal -> smart -> really smart.

[1] The number of faster-than-linear, positive-feedback processes in nature is really small and carefully constrained. I see no reason to believe that intelligence is one of them. Most such processes are logistic instead. So the default assumption is that it's logistic.
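The "left side looks like increasing returns" point can be shown numerically. A minimal, purely illustrative sketch of a standard logistic curve:

```python
import math

def logistic(x):
    """Standard logistic curve: approaches 0 on the left, 1 on the right."""
    return 1.0 / (1.0 + math.exp(-x))

# Marginal gain per unit step along the curve.
gains = [logistic(x + 1) - logistic(x) for x in range(-6, 6)]
# On the far left the increments grow (it looks exponential);
# past the midpoint they shrink, and logistic(x) never exceeds 1.
```

The same data, sampled only on the left, is indistinguishable from exponential growth; the turnover is invisible until you cross the midpoint.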

WoolyAI's avatar

Your personal experience is on the wrong scale because it's on a human scale. The dumbest human is still above the 99.999999999999th percentile of intelligence among all living beings in Earth's history. The median intelligence on a scale of all living entities ever isn't a dumb person, it's something like a frog. We do not begin to see declining returns to intelligence as we go from frog to lizard to cow. Your observations about declining returns to intelligence are focused on the extreme right end of the distribution.

No one is going to look at human intelligence, which in the evolutionarily trivial period of 100k years has become so dominant that we've become an extinction event for other species on par with the dino meteor, and say "Yes, we have clearly reached diminishing returns to intelligence." If you keep it to human scale, though, I will concur: I often see minimal personal returns to IQs over about 130. That's real. But the right scale is all intelligence, not just human intelligence. And a graph of all intelligence is not a logistic curve; it's an exponential curve that turns over at exactly human intelligence, right around "as smart as a human lawyer," and it would look absurd if you actually drew it.

Loominus Aether's avatar

I'm skeptical that logistic is the right default. I'd certainly concede it when there are finite resources at play, conditional on a fixed level of tooling/technology/effort/etc.

But the purported logistic curve for oil extraction got blown out of the water when we discovered fracking. I'm sure there's still *A* logistic curve under there somewhere, but it looks nothing like the one we imagined.

Same spirit, there might be resource limits on intelligence, and probably are on wetware like brains. But I wouldn't take a priori that those limits are the same for silicon.

Peter Defeel's avatar

In reality there’s a limit in compute that is being reached as we speak. Therefore gains have to come from elsewhere.

Transistors are nearly as small as they can get, training frontier models already consumes gigawatt-hours, and doubling that every two years would demand something like the output of a national grid. The cost of new fabs runs into the tens of billions, so the money is as much of a constraint as the physics or the power. Moore’s law was steady while it lasted; what we face now are plateaus, where real progress depends on smarter algorithms and efficiency rather than brute-force silicon or electricity.

(For context, GPT-4-class models are thought to have used on the order of tens of gigawatt-hours to train. If that demand doubles every couple of years, by the mid-2030s you’d be talking terawatt-hours for a single training run: the kind of consumption that starts to approach the annual electricity usage of a small country.)
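The compounding arithmetic here is easy to check. A quick sketch, using ~50 GWh as an illustrative starting point (public estimates for GPT-4-class training energy vary widely, so this figure is an assumption) and a two-year doubling time:

```python
def training_energy_gwh(years_from_now, start_gwh=50.0, doubling_years=2.0):
    """Projected training energy (GWh) if consumption doubles every doubling_years."""
    return start_gwh * 2.0 ** (years_from_now / doubling_years)

# From ~50 GWh today, doubling every two years,
# the projection crosses 1 TWh (1000 GWh) after roughly nine years.
```

Shift the starting estimate up or down a few times and the crossing point only moves by a couple of years, which is why the order-of-magnitude conclusion is robust even though the inputs are rough.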

MKnight's avatar

WoolyAI’s answer seems to better address your question than mine. If you feel like the standard Bostrom answer fails because we’ve reached some asymptote, I can’t argue you out of it. But I can ask you to imagine a similar conversation between chimps about whether or not there can be an intelligence greater than theirs. When the idealist notes that chimps are smarter than bugs, the cynic might say “if you look at the left side of a logistic curve, you will see increasing returns”. They might observe that the smartest chimp can make sentences only slightly better than the average chimp. But they’d be wrong about a logistic pattern to intelligence. Or at least, the logistic curve that applies on the level of species is very different from the one that applies to a single species.

Doctor Mist's avatar

It still seems hard to see why, as a fact of nature, the curve for intelligence would happen to flatten out right at human level. It certainly isn’t hard to *imagine* how useful it would be to be even as smart as ten von Neumanns working together in a room, and no particular reason to imagine even *that* is the upper limit.

If you perceive a flattening in human intelligence, it might be because we have evolved for group behavior, and isolated far-right-tail individuals are hampered by the paucity of peers to collaborate with. Or that intelligence is a recently-evolved attribute of humans that is still associated with various other cruft in the genome (like lack of charisma?) that interferes with what the individual can accomplish. Or that our monkey-nature makes sure that when a peg gets too high it’s pounded down.

It might be that AIs created by training on human text/behaviors can’t ever exceed previously displayed human abilities. But again, it’s hard to imagine what kind of law of nature would guarantee that. A collection of only slightly above human-average AIs might be free of Dunbar’s limit, for instance, and able to cooperate directly with more peers, accomplishing more than a similar collection of human AI researchers could.

bell_of_a_tower's avatar

All of these answers seem to be arguments from incredulity. "Hard to imagine" isn't a convincing argument.

And the idea that "all life" is the right scale just seems to be assertion, rather than argument. It assumes that intelligence can be meaningfully generalized as a single scale running over very disparate creatures, which is a massive smuggled assumption/stolen base.

Wimbli's avatar

120 IQ is squarely in the range of midwits. The sort of people who don't use Quantum Effects in their protein folding calculations. For twenty years! That's twenty years of research down the drain, and this is not exactly a sign of "actually high intelligence." Many years of stupidity, of PhDs showing a complete and utter lack of understanding that "maybe we're doing something wrong"?

Chess is another midwit sport. Chess players do not have very high IQs, as they are subject, in general, to trolling by people less good at chess but smarter than they are.

There are absolutely tons of highly charismatic "super smart" people. You don't know about them because they are either comedians (you can't tell me the good comedian isn't a shmart guy, because I know he is), or too busy doing ten different jobs under ten different pennames.

Assume you lack the capability for metaintelligence. People more than about 10 IQ points higher than you simply look "absurdly lucky."

Peter Defeel's avatar

120 isn’t, by definition, midwit. Unless you have a very broad definition of mid. It’s about the equivalent of 6ft 1 1/2 in height.
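The height comparison is just a z-score conversion. A minimal sketch under the standard IQ model (mean 100, SD 15; the height figures below are rough population approximations, not exact values):

```python
import math

def percentile_from_iq(iq, mean=100.0, sd=15.0):
    """Percentile rank of an IQ score under a normal model."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# IQ 120 is +1.33 SD, roughly the 91st percentile:
# about as common as a man around 6'1.5" (assuming ~5'9.5" mean, ~2.9" SD).
```

By the same conversion, "midwit" in the 100-110 sense would sit near the 50th-75th percentile, which is the substance of the disagreement above.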

User's avatar
Comment removed
Sep 30
Peter Defeel's avatar

> Midwits are "in general" smarter than 100.

Again, by definition, mid intelligence would be around 100. I’m not certain you understand the nature of the IQ scale; it’s a relative scale.

I think you are making up your own definitions here. It’s not that common for people at 1.5 SD above the median to actually believe any of that.

That said, I do believe there’s a step change at the top level of intelligence, one that isn’t captured by IQ (which assumes a bell-shaped curve), but that’s hard to measure. By their fruits shall ye know them.

It’s not that radical to say we are all stupider than von Neumann.

User's avatar
Comment removed
Sep 30
Peter Defeel's avatar

Well indeed, trusting the science is not itself a scientific idea, of course.

I’m still unsure who you think the brightest people on the planet are, aside from a few mediocre comedians and your friend who was joking about winning the Nobel Prize for World of Warcraft.

bell_of_a_tower's avatar

Can you give an example of someone who is both highly charismatic and *by normal measures* super smart? Intelligent (i.e. over 100 IQ), sure. I'll buy that. But *genius level* (however you define that)?

And I think your causality is backward--you'd need to show that *all* (or *most*) geniuses are also super-charismatic, not that *some* are. What I'm questioning is that IQ *causally predicts* charisma. I believe the two are actually uncorrelated above some relatively low threshold. That would be totally consistent with "there exist super smart, super charismatic people". But it wouldn't be consistent with "if AI gets super smart, it will therefore also be super charismatic".