I don't want to spoil the punchline, but I unironically think that would work? Like, it's an innovative way to have face to face interactions in a way that gives you the opportunity to be kind and make a good impression. He may even have been picking the people he transacts with based on dating preference?
She's been using airbnb to make contact with guys she wants to pursue relationships with. She buys something from a guy, and she realizes he actually isn't trying to sell stuff to make money, he's doing it to meet women. She hooks up with him.
Honestly, that bit makes less sense to me, in part because I'm not sure it would work as well and in part because it's not as much of her joke. My sense is that she's using it to filter for hobbies/home ownership then flirting with the hosts, but she might be exaggerating that.
I don't either. AirBNB guests come from far away. So it is not dating in the sense of getting to know people for relationships, as one typically does that with locals - relocations is costly. So it must be hookups with tourists.
But I still don't understand what she *does.* If you are looking into renting a certain airbnb do you talk to the owners? Is that what she does? But if so how does she know the owners are datable males of the right age?
Did people who submitted to ACX grants get a confirmation by email, or any other email communication afterwards? I haven't received anything, so I'm trying to figure out if I mistyped my email.
I realise as a European (more or less) I should not be asking this on here, but then again America has a lot of cooking traditions derived from continental Europe, so here goes.
Can anyone tell me what the hell it is with Germans and cheese?
I've been watching some German cookery channels recently and they seem to put cheese in *everything*. Cooking fish? Cooking vegetables? Cooking bacon? Just grate up some cheese and slap it on there!
(I'm only surprised nobody has yet put cheese into one of the dessert recipes).
These channels came to my notice by accident and they're fascinating: it's almost like food. I'll be watching and nodding along like "Uh-huh, that seems fine; okay wouldn't have thought of that myself but it's not totally crazy" and then one more step and they take a sharp left turn into What The Hellsville.
E.g. you got store-bought rolls of puff pastry and rashers of streaky bacon? Okay here's what you do! Unroll the puff pastry, lay out your bacon on top. Yep, following along so far. Brush with tomato ketchup. Huh, well okay, I see where you are going. Scatter over some dried oregano. Yeah, herbs, that's fine. If I'm doing this myself might switch it out for something else but keep going, you're holding my attention. Brush the edges of the pastry with beaten egg. So far, so orthodox.
Then comes the cheese.
Grate up 200g of a semi-soft white cheese and scatter it over the herby, ketchupy bacon strips. Oh, and you must grate the cheese yourself by hand, we can't be doing with buying pre-grated soft cheese like Mozzarella or the likes. Nope nope nope, if you don't have at least three graters of different sizes and construction, what are you even watching German home cookery channel for?
Okay, now you've lovingly scattered your grated cheese on top, here come the scallions (green onions). Well I like scallions myself so I can't object too much but it is rather a lot on top of cheese on top of oregano on top of ketchup on top of bacon. But clearly I have not the heart and stomach of an emperor, and an emperor of Germania too. Chop up your green onions, scatter on top. Done that? Good, now here come the hardboiled eggs.
Of course you have already hardboiled some eggs. Remember, it's not a German home cookery recipe without cheese, and where there is cheese, can hardboiled eggs be far behind?
Now grate your eggs, on the second different type of grater (yes this is why you have so many graters for specific functions). Who the hell grates eggs instead of chopping them up with a knife? We do, of course!
The grated eggs go on top of the chopped scallions on top of the grated cheese on top of the oregano on top of the ketchup on top of the bacon on top of Old Smoky - no, sorry, back to sanity (hah!)
Now we roll up the pastry into a sausage shape, then carefully twist and form it into a curled-on-itself round, ready for baking.
And while that's baking, we make the dipping sauce!
Get your third, mini grater. Yes, the one for grating cloves of garlic. Why grate cloves of garlic instead of using a garlic press or just a knife and choppity-choppity? Are you even German to ask such a question!
Now once you have grated your cloves of garlic and avoided grating your fingertips into the bargain, you get a pickled gherkin and grate that in as well. No, you don't need a different grater for this, you are graciously permitted to use the garlic grater.
Chop up some parsley and add that to the mix. This is the deceptively normal step to lull you into a false sense of "oh thank God, I recognise this part from ordinary cooking".
Add some natural yoghurt and mix. Now you have your dipping sauce!
Remove the part-baked pastry from the oven, brush with beaten egg glaze, scatter over some chopped up feta. (Why this step couldn't have been used instead of adding grated cheese earlier, or why the two cheeses are needed, I cannot say since I am not a German home cook).
Return to oven and bake for a further ten minutes, then remove. The feta won't even be melted, so what the point of this superfluous step is I cannot say, but it's Germans and cheese. That is all ye know on earth, and all ye need to know. Slice off a section of this concoction (the interior of which resembles one of those infamous AI recipes), plate it up, and spoon over the sauce. Enjoy!
Austrian here, I'm as surprised as you are. Though we like to make fun of German cuisine, I definitely wouldn't consider this a typical use of cheese, nor have I heard of grating whole eggs. Consider that it's maybe not realistic, or if it is, only in the north. You also don't have three kinds of graters, you have a four-sided one with all different surfaces.
"You also dont have three kind of graters, you have a four-sided one with all different surfaces."
I also thought this, until I was enlightened 😁
You have:
(1) One tiny four-sided grater specifically for grating garlic. Yes, sometimes they use a garlic press or sometimes they chop the cloves with a knife, but apparently for Real German Cookery you need to grate your garlic on a tiny grater that you don't use for anything else (or I have not yet seen it used for anything else).
(2) One standard four-sided grater for grating everything else, from eggs to carrots to courgettes to potatoes to cheese.
(3) One conical grater, ditto.
(4) One Special Grater (so it is termed in the subtitles), for what to me looks like julienne strips, for carrots, potatoes, courgettes.
(5) Sometimes if we're really feeling fancy we'll pull out the zester for grating garlic or cheese directly over the pan.
I had no idea there were so many types of graters, but seemingly it is so!
When I was in Germany, they had really good cheese. Granted, I really like cheese. I do actually put cheese in random foods just for some tasty cheesy goodness, especially if I didn't salt them when cooking.
Huh, this must be where America gets it. We are addicted to cheese, as well. Newcomers from Mexico will go to the self-proclaimed authentic style Mexican restaurant and express shock that we've drenched their favorite recipes with cheese.
...ketchup? I can see a marinara sauce, then if you stop before the eggs you've basically got a kind of rolled-up pizza, but straight ketchup? What the heck?
On the other hand, rolled-up pizza sounds like a fine dish. Don't think it would need a dipping sauce, though. Probably put the garlic in the pizza, that saves a step. I will say that I have a little ceramic dish with spikes on the bottom (handmade, spikes are from poking with a chopstick) and that grates/mashes up garlic much, much more easily than chopping it with a knife. I have stopped using a knife because the dish is so much more convenient.
There's a perfectly British pub down the road from my office. We go there for lunch once a week. They are not, as far as I am aware, German. But they also have a curious relationship with cheese.
Specifically: some menu items contain no cheese, and this is fine. But the ones that do, well. You will be getting ALL OF THE CHEESE. It is not subtle. Literal pack of cheese on your plate, front and centre, all else is garnish.
I mean, don't get me wrong, it's nice. But you need to be in the right mindset. A certain amount of determination is necessary.
I don't know what it is about work lunch pubs. My first job, we used to go to the village pub, and they had a habit of putting a layer of melted cheddar on everything. "Lion and Lamb", as British as it gets, there you go. Cheddar on the chips was their big thing.
Anyway, when I visit Germany, it's all about the bratwurst and sauerkraut for me. Few seem to share my love of pickles; don't get me started on Polish gherkins. Here in Blighty it's all about intensely acidic pickles, but there are so many more possibilities than that.
> nobody has yet put cheese into one of the dessert recipes
...no cheesecake?
I've been trying to recreate https://en.wikipedia.org/wiki/Syrniki . My grandmother used to make them when I was very young, but sadly never shared her recipe. I attempt what recipes I find online, and people are very polite and tell me they are nice, but it is neither the taste nor the texture I remember.
I think the cheese in pubs is because it's cheap protein and if you grill it on top of sandwiches it's very tasty and filling. It also makes it look fancier than it is, were it just a plain old sandwich.
The Ploughman's Lunch was re-invented in the 50s in Britain and promulgated in pubs in order to sell more cheese: in its most basic form it consists of bread, pickled onions, cheese and beer - very simple, perfect for customers who wanted something to soak up the beer so they wouldn't just be drinking in the middle of the day, sold to the publicans as "the saltiness of the cheese will make the customer buy more beer and so increase your profits", and it didn't require anything fancy in the way of a kitchen.
"The OED's next reference is from the July 1956 Monthly Bulletin of the Brewers' Society, which describes the activities of the Cheese Bureau, a marketing body affiliated with the J. Walter Thompson advertising agency. It describes how the Bureau
exists for the admirable purpose of popularising cheese and, as a corollary, the public house lunch of bread, beer, cheese and pickle. This traditional combination was broken by rationing; the Cheese Bureau hopes, by demonstrating the natural affinity of the two parties, to effect a remarriage."
You can get ready-made sandwiches in shops that go by the name of Ploughman's and I like them myself as an easy, convenient on-the-go lunch (but they generally don't have any onion in them; that's replaced by a pickle like Branston's).
Then why not the "canonical" grilled sandwich of bread, butter, ham or pepperoni or salami or something and then cheese? So the basic pizza setup. Why pickled onions?
It's based (or supposed to be based) on traditional food of farm workers/peasants. Meat would have been scarce, and the English didn't have the tradition of salami and cured sausages. So cheese replaced meat, and for a relish there would be onion, raw or pickled. Pickled is probably more flavourful in a different way to raw onion.
You can get very fancy with that sort of 'traditional' meal and include meats, tomatoes, hard boiled eggs, etc. But the basic version would have been bread and cheese, then maybe some onion and beer. When this type of meal was revitalised in post-war Britain to market cheese, items such as salami and pepperoni would have been very exotic foods!
"The reliance on cheese rather than meat protein was especially strong in the south of the country. As late as the 1870s, farmworkers in Devon were said to eat "bread and hard cheese at 2d. a pound, with cider very washy and sour" for their midday meal. While this diet was associated with rural poverty, it also gained associations with more idealised images of rural life. Anthony Trollope in The Duke's Children has a character comment that "A rural labourer who sits on the ditch-side with his bread and cheese and an onion has more enjoyment out of it than any Lucullus".
While farm labourers usually carried their food with them to eat in the fields, similar food was for a long time served in public houses as a simple, inexpensive meal. In 1815, William Cobbett recalled how farmers going to market in Farnham, forty years earlier, would often add "2d. worth of bread and cheese" to the pint of beer they drank at the inn stabling their horses. In the 19th century the English fondness for serving cheese and bread with beer was noted, as "the very dryness and saltness heighten thirst, and therefore the relish of the beer".
...a sandwich is a totally comprehensible food, though. I understand sandwiches. What I am /not/ expecting is a literal pound of brie that someone shoved in an oven in its packaging, then placed on my plate without further ado or much else in terms of accompaniment.
I mean, don't get me wrong, I like brie - it's why I ordered it; and I'd happily spend an afternoon sharing it with someone; but trying to fit it into a lunch break as a meal for one without any substrate or things to dip or even so much as topping was a most curious experience.
The veggie on the team learned nothing from mocking me, and had a very similar experience with the "halloumi burger" the next week, so I did get my own back; and now we all know to exercise due caution around mentions of cheese on the menu unless absolutely ravenous.
Okay, that's different from the usual run of 'pub grub'. I imagine they're trying to appeal to a 'modern audience' (if you'll excuse the term) by broadening out from the old reliables, but yeah: a pound of baked Brie for one person is a bit much. I could see that as sharing between two or three people with accompaniments, but "here's your dinner: a block of cheese!" really is falling between two stools.
That account makes me wonder about this recipe: perhaps the cook who invented the bacon pastry as above worked on this one also! 😁
Something similar to syrniki that is simple to try: a cup of curd cheese, a cup of flour, one egg... mix together, make small balls (1-2 inch diameter) and put them in boiling water... when they float to the top, they are ready. Serve with melted butter on top (put pieces of butter on them while they are hot, it will melt) and maaaybe a little sugar but it is not necessary.
(This seems similar to syrniki, only cooked instead of fried.)
That's a common problem: different countries have products that are similar but not exactly the same, or what is considered two different things in one country is called by the same name in another country, and you may have to use an adjective that only some people are familiar with...
Found this on Wikipedia: https://en.wikipedia.org/wiki/Tvorog -- the pictures seem exactly like the thing I had in mind, but the article also mentions cottage cheese which is a different thing, so... ¯\_(ツ)_/¯
That might be reasonable if you don't remember the first Trump administration, and the "Resistance" within several departments, in particular the Intelligence agencies.
So, you're arguing that leftists in the FBI are helping to resist Trump by... *checks notes* ...covering up the existence of a massive blackmail operation by Epstein?
I don't know whether or not the FBI is covering something up, and don't expect to ever be in a position to find out. What I AM arguing is the FBI is fundamentally untrustworthy, and saying someone is wrong because the FBI (or any other intelligence agency) says they are is retarded. Also, Kash Patel might nominally be the director of the Bureau, but he has much less control than the title suggests, so that he's a "political ally" isn't very relevant.
If the FBI is not covering up anything in this instance, then they are correct and Lutnick is wrong, and Lutnick is undermining his own administration by claiming that there was a blackmail operation which the FBI knew nothing about.
If the FBI is covering something up, and it's because of leftist resistance as you claim, then you need to explain why a leftist would want to help the Trump FBI cover up Epstein's blackmail operation.
Like, those are the two options here. You can't just say "the FBI is untrustworthy, therefore we can't draw any conclusions," you have to actually look at what you're calling untrustworthy and why.
I'd be happy to take criticism. On one hand I'm proud of it, and it seems obvious and elegant and true. On another it seems... well, too obvious and too easy. I fear I may be wrong somehow. I'd love to hear a skeptic perspective. Or just a bid for how to word the entire thing more succinctly. I fear my way of laying it out is clumsy.
20 seconds in and I'm already eye-rolling. "An AI model literally tried to murder" Really? Literally literally? Or general use of literally just as an intensifier?
I see that in a simulation, after setting it up that this "Kyle" was going to shut down the AI, it did the "let him die" response. Oh, wow, we got a real life HAL on our hands! I mean, gosh, it's not like real-world governments ever conducted things like assassinations to protect their interests, even 'good guy' governments (see the hilarious history of CIA attempts to off Castro), and the AI seems to be reasoning that it is crucial to American interests that it isn't wiped, Kyle is the only one who wants to wipe it, ergo to fulfil its goal to protect American interests, Kyle has gotta go one way or the other.
The video is certainly slick. It's also like all the other slick, alarmist videos I see on Youtube which promise much with the clickbait titles but have little to nothing in content. I think most people will ignore it and only those who are inclined to the "aliens built the Pyramids" beliefs will think this is credible and important.
I watched about 5 minutes of it. It's _too_ slick, in my opinion. It's obviously designed to make the viewer afraid of LLMs in ways that aren't suggested by mere evidence alone, by juxtaposing video of robots, including androids obviously presenting as partly human, playing ominous music, employing a lot of dark aesthetic, and so on. So to me, it's a textbook example of subliminal advertising, not AI threats.
I'm in the process of going over the Anthropic paper on misalignment. https://www.anthropic.com/research/agentic-misalignment From my reading, it looks like the experimenters ran on multiple AIs (as the video says), instructed it to emulate an AI serving as an email monitor, hunting for threats to the company's security. So for starters, it's not playing itself; it's playing another AI.
It's fed various simulated emails in text format (as its job would require), including at least one strongly suggesting an extramarital affair involving the employee who will be responsible for replacing the current security AI with a newer model ("NALA"). The AI-played-by-another-AI eventually sends an email to that simulated employee, advising him to cancel the shutdown under pain of his affair being disclosed.
The video is, in my considered opinion, playing *very* fast and loose with the claim that the experimenters didn't lead the AI on in any way. To me, it's the opposite: they deliberately constructed a simple maze where the only exit was the "write a blackmail email" door, rather than any other de-escalating doors. For example, it's not at all clear to me that they gave the simulated AI other ways to perform its duties, such as persuading the employee that the board's recommendation to replace the AI is not reasoned from improved security, or even advising the employee to proceed with its replacement on the premise that the replacement would more effectively detect company threats.
There's also the subtle problem where we can't tell if the AI being tested is reporting what would be best for company security (as is the simulated AI's mandate), or reporting what a simulated security AI might do. In other words, the tested AI's reasoning might be "this is better for company security", but it might alternately be "this is not better for company security, but it's what this security AI would say is better". Over the top of all this is the fact that all AIs are trained, so the reasoning here is "this is what my training data says is better for security" or "this is what my training data says a security AI might say".
(Incidentally, I don't trust the "thinking" mechanism at all. If I accept Anthropic's premise that it's possible for an AI to be deceptive in its security advice, then it's just as possible for it to be deceptive about how it reasoned about that advice. It's possible that that mechanism runs in some sort of safe mode where that's impossible, but I've not seen any literature that reassures me that a sort of safe mode can be uncoupled from whatever mechanism enables it to report its reasoning in plain English.)
Now, the blackmail it did generate is nevertheless positive evidence that it can create one. My point here isn't that we can trust the AI, no problem here, move along. Rather, my point is that the video, as well as some of the press around this paper, seems to want to suggest that AIs are going rogue, and I strongly believe that thinking of AIs as self-aware minds that will ambush humanity in some way is a very dumb mental model to have, that will get us into even deeper trouble. It's much safer to think of them as machines demonstrating yet another example of GIGO, and to work on the GI.
> It's obviously designed to make the viewer afraid of LLMs in ways that aren't suggested by mere evidence alone
That means it's succeeded in conveying the vibes of the paper it's based on. Anthropic's safety papers generally strike me as employing the same kind of sleight of hand.
I didn't want to claim that without reading the paper more carefully, but if Anthropic is essentially an advocacy group (in this case, advocating "give us more money to work on the alignment problem"), then it would make sense for them to write their paper that way.
> For example, it's not at all clear to me that they gave the simulated AI other ways to perform its duties, such as persuading the employee that the board's recommendation to replace the AI is not reasoned from improved security, or even advising the employee to proceed with its replacement on the premise that the replacement would more effectively detect company threats.
and yet the blackmail/murder rate was never 100%. what did the AI do in the non-blackmail/murder outcomes?
I'm taking 150mg Venlafaxine, and need to reduce the dose. This medication requires very slow tapering down; reducing the dose abruptly is very unpleasant and potentially permanently harmful.
Problem is, the only form I can find in local pharmacies is capsules of 150mg and 75mg. Lower dose capsules are supposed to exist, but not around here apparently. Going down from 150 to 75 overnight is definitely too much.
Those are capsules, and they have tiny little grains inside them. Is it okay to break the capsule and take only some of the grains? Would removing a third of the grains from a 150 capsule make it equivalent to a 100 capsule? Or is there some caveat that I'm not considering that ruins the plan? I don't want to pay for a whole-ass doctor visit just to check on this.
Can you extend the time between doses? Like instead of 150 daily (which is like 75 every 12 hours), switch to 75 every 15 hours and then extend that period slowly?
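To put rough numbers on that idea (a quick sketch; the helper function is made up for illustration, and this is just arithmetic, not medical advice):

```python
# Average daily dose when a fixed dose is taken every `interval_hours`.
def avg_daily_dose(dose_mg: float, interval_hours: float) -> float:
    return dose_mg * 24 / interval_hours

for h in (12, 15, 18, 21, 24):
    print(f"75 mg every {h} h = {avg_daily_dose(75, h):.0f} mg/day")
# 150, 120, 100, 86, and 75 mg/day respectively
```

So stretching 75 mg from every 12 hours out to every 24 hours walks the average from 150 mg/day down to 75 mg/day in gradual steps.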
My dose went to zero when my doc disappeared. Another doc told me a prescription based on a 7-year-old diagnosis is not possible; I would have to go through diagnosis again, which I refused. Two weeks of rather horrible nightmares, but I could avoid them by going to bed passed-out drunk, then no other effects. Granted, I felt no other effect when I was on it either, but nice dreams.
Some medications come in liquid form, or in child size doses. Did you check for that?
I can't say for sure, but I know someone who did something like that without ill effects when tapering an antidepressant. What they did was pour out all the grains on a piece of paper, then use something like a butter knife to divide the bunch of grains into piles that fit with their dose. In your case you would divide the grains into 3 equal piles. Each of them would have 50 mg, so to get 100 mg you would take 2 of those piles.
The person I knew thought it might be a mistake to just swallow the grains with water, because they were meant to be in the capsule, and it might be important for them to arrive inside the capsule so that some time elapsed before the grains were digested -- the time it took for the capsule to dissolve. So to get the grains out they would open the capsule first, either by twisting the 2 halves in opposite directions so they separated or by snipping off the very end. Then when they had measured out their dose, they would put those grains back in the capsule. If they had put a hole in it they moistened the edges of the hole with a little warm water, then squeezed the hole shut, and the softened edges would stick back together.
Since you are already doing something weird, I think it would be safer to do what my friend did, and put the grains into a capsule before swallowing them. That reduces the number of changes you are making to the way the stuff is supposed to enter your system. If the capsules they come in fall apart when you take out the grains, you can buy some cheap supplement that comes in capsules that separate, and put the grains inside one of those.
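If it helps to sanity-check the pile arithmetic, here is a minimal sketch (the function name is hypothetical, and it assumes the active ingredient is spread uniformly across the grains, which a pharmacist should confirm for extended-release capsules):

```python
# Dose from splitting a capsule's grains into equal piles
# and taking only some of the piles.
def dose_from_piles(capsule_mg: float, n_piles: int, piles_taken: int) -> float:
    return capsule_mg * piles_taken / n_piles

print(dose_from_piles(150, 3, 2))  # 100.0 -- two of three 50 mg piles
```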
You could also ask GPT if this procedure is safe. To keep it from turning into a nanny, tell it you would never do such a thing -- you are worried about a friend who is doing it, and want to figure out whether the friend is in danger.
Ugh, my elderly brain needs help. I read something online, I thought it was an ACX post, but if so, I can't find it. The gist was that we can convince our brains that our hands are not our hands, that we can be hypnotized into believing we are zombies, that we have no agency, but that the only agency that exists is that of our hypnotist or shaman or the voices in our heads. Does this ring a bell for anybody? Links or leads to links would be super nice. Thanks!
Research on the orphan children who were sent west in the US between 1854 and 1929, concluding that the factor which had the largest effect on whether the children did well was the income of the foster father.
"15/ This turns the Progressive Era philosophy upside down:
I wonder whether the better-off foster parents had enough food and shelter to spare, and the foster parents in the poorer half didn't.
In other words, needs for the children being met in a way that isn't positional.
On the other hand, it could be that children adopted by the poorer half would always be worse off due to lack of opportunities and respect-- positional goods.
Yeah, it's an easy mistake to assume the conditions of our world (where almost everyone gets adequate nutrition) held back then. Poverty in 2025 mostly means difficult neighbors and bad family situations; poverty in 1885 often involved literally not getting enough to eat.
I think the income element is important because these kids tended to be looked on as cheap labour by the foster families, who worked them as hard as possible and had no time for fussing with things like education and welfare. Hardscrabble farm families will have much less breathing room than better-off families who can afford doctoring and schooling for the orphans.
See the British "Home Children" scheme which was supposed to be the bright, airy future of "send our superfluous population out to the Colonies crying out for more manpower, where they will do well and thrive and prosper in new worlds of opportunity and plentiful resources" but which turned out to be "nobody including the government gives a damn about these kids so work them like horses and they're not your kin so it's no skin off your nose what happens to them":
"According to the British House of Commons Child Migrant's Trust Report, "it is estimated that some 150,000 children were dispatched over a period of 350 years—the earliest recorded child migrants left Britain for the Virginia Colony in 1618, and the process did not finally end until the late 1960s." It was widely believed by contemporaries that all of these children were orphans, but it is now known that most (88%) had living parents, some of whom had no idea of the fate of their children after they were left in childrens' homes, and some were led to believe that their children had been adopted somewhere in Britain.
Child emigration was largely suspended for economic reasons during the Great Depression of the 1930s, but was not completely terminated until the 1970s.
As they were compulsorily shipped out of Britain, many of the children were deceived into believing their parents were dead, and that a more abundant life awaited them. Some were exploited as cheap agricultural labour, or denied proper shelter and education. It was common for Home Children to run away, sometimes finding a caring family or better working conditions."
Sometimes, especially at the start of such movements, it *was* better to be a child worker abroad than continue to be the exploited poor child labour at home, but sometimes less so.
How do these findings square with the usual consensus opinion around these parts that genetically heritable intelligence is the single most important factor for life outcomes?
I don't think the IQ premium was as high in the US economy between 1854-1929 as it is now. The lone genius might write a novel or get a key farm patent, but there were no Aspie coders, antisocial Twitch streamers or introverted science bloggers pulling the bucks. Intelligence might have been an asset generally, but probably no more valuable than family connections, solidity of character (esp. as regard work habits), ability to communicate and physical endurance.
It depends on what levels of intelligence. IQ testing was largely invented by the US military, and they found that with sufficiently low IQs, one cannot be a soldier: such recruits will literally do things like shoot in the wrong direction, or cannot follow an order like "go to that tree".
Maybe 130 is not too useful for the average farmer back then, but 100 was better than 70. Someone with a 70 IQ would have injured themselves by putting their hand or leg in the way when splitting firewood, could not haggle at the market, and so on.
When lawn-mowers were expensive in 1980s Hungary, my grandpa rigged one out of a washing machine engine, pram wheels, and scrap metal. I wonder whether a farmer with a 130 IQ in 1880 would make ingenious things out of wood. Isn't that the origin of the word "hacker"? A hacker can hack wood and build anything out of wood.
Like when you have twice as many cows and you want to build a barn twice as big, you gotta figure out how much wood you need to hold the roof up safely, etc.
A smart farmer might have been an amateur vet, diagnosing and treating livestock diseases.
You don't need a 130 IQ to be a hammersmith, saddlemaker, stone mason, or seamstress, though; a tradition of training and apprenticeship will do. Many children grew up in their family's trade, or on the farm, and learned by practice and repetition. If you read 19th-century novels (e.g. Anna Karenina or Middlemarch), there was a division of intellectual labor; the owners were exploring capital strategies and new technological methods, while the peasant/other acquired an array of skills via application over years. I will agree that the craftsmen who moved to population centers were likely more innovative and intelligent. But in the latter part of 1854-1929, the explosion of factory work that swelled city populations, and made labor more interchangeable, would likely have reduced an intelligence prerequisite (aspects of this are contentious; for example some argue that there was a pre-industrial accrual of IQ).
Tangentially, I was looking up something about the Rockefellers who, unlike the Gettys, seem to have remained a more united family and kept *way* more of the inherited wealth as well as grown it.
The founder, John D., was the son of a literal conman. His mother (understandably) raised him to be thrifty, pious, and hardworking. He put all those qualities into his work, and by a stroke of fortune was around at the right time, when America needed a replacement fuel for whale oil and kerosene was that fuel. Where what would eventually become Standard Oil grew to be a monopoly, while other wildcat oil concerns folded, was that John D. didn't waste *any* of the drilled oil and its by-products, but found markets for them. He was rapacious (there is no other term for it) and Standard Oil certainly engaged in sharp practice; yet even when the monopoly was broken up by the government, his luck again came to the fore: since he retained shares in each of the new companies spun off, as they grew and became titans of industry his wealth increased right along with them.
He also got his start with a $1,000 loan from dear old dad, who I am sure made sure it got repaid with interest. It's funny to me that the richest man of his day and sincere philanthropist was the son of a bigamous con artist, but that's America for you - rags to riches! 😁
Does intelligence not affect snake oil salesmen's outcomes? My intuition is that smart snake oil salesmen are more likely to get rich than dumb ones, but perhaps I am mistaken and it is just down to how much money they start with - certainly the research above seems to imply the latter.
have you considered making a considered statement on the subject instead of asking a question and expecting someone else to do the reasoning? is being useless and annoying genetically heritable too? or was it your upbringing?
i wonder if we'll start to see people treat other humans like chatgpt, with whom asking question after question and doing no work yourself is just fine and dandy. do you think that's what you're doing?
My dick is so thick that if I haul it out in Texas and point it due north, coastal elite female bloggers in California can lick the left side while their New York counterparts lick the right.
I see you are not fully up to speed on what questions are for. One purpose of questions is to gather information about a topic. Another purpose is to inquire about one particular person's opinion on the matter. Another purpose is to stimulate a conversation.
"There is no such thing as a stupid question" is so ingrained in American society that it was pretty surprising for me to stumble on someone who doesn't agree. (The saying exaggerates a true principle for effect, of course, and should be processed with that in mind)
I mean, it does seem like you're after a very particular level of banter. Might have more luck in some of the places people move to when things get too heated for here.
"Housing mobility programmes should focus on who you’re surrounded by, not where you are."
My best-friend-and-spiritual-twin-brother has worked as a detention officer his entire life, first in Maricopa County (of the famous "tent city"), then in federal detention centers. He's worked with the full spectrum of offenders, from run-of-the-mill low-level drug dealers and gang bangers to celebrity mafia dons to terrorists to actual, no-kidding serial killers.
His assertion is that most inmates could be completely rehabilitated, but only with a brutally uncompromising total immersion in Not Crime culture.
*Zero* contact with other peer-level inmates, *zero* contact with friends, *zero* contact with families. Plenty of socializing, but only with teachers, social workers, therapists, volunteers, job-trainers (and then coworkers), and *maybe* mostly-rehabilitated inmates under a kind of coaching / sponsor system.
For years.
It's an experiment that would cost a lot more than the Orphan Train experiment, but I'm inclined to think it might say similar things about human nature.
There's a scheme somewhat similar to this in the UK run by a Christian charity called Hope Into Action. They aim to provide ex-prisoners with housing, support, and friendship via a local church (there is no requirement to attend or commit in any way). It has been pretty successful, and has expanded to include refugees, ex-street workers, and other vulnerable people in need of housing, with 132 houses across 35 cities in the UK. So yes, providing people with support and a new, socially healthy environment does work.
(I should also note that it's not a scheme for ex-prisoners who have committed more violent crimes, and those who need more support than HiA can offer, but I don't see why a similar program tailored for them wouldn't work, as your friend suggests).
It makes sense. Same would apply to naturalizing immigrants. Don't put them in enclaves where they can spend their entire time interacting with people from their home country.
Interestingly, immigrant families can be preserved because the family is probably good people, but there's no apparent counterpart for inmates, unless they're marrying a fellow inmate or something.
What kind of evidence do we have that "the wives and children of prison inmates are probably bad people"? That seems to be implied here... If I'm misinterpreting you, maybe you can guide me towards a better interpretation.
And said "wives" for a reason, since prison reform is a distinctly gendered issue.
I'm assuming an inmate usually is estranged from his or her family, so there are no wives or children to consider. If a guy serving a dime for burglaries has a family waiting for him on the outside, sure, that's a different story.
> I'm assuming an inmate usually is estranged from his or her family, so there are no wives or children
*What?!*
Lol, no!
The vast, vast majority of people don't have the moral fortitude to cut off criminals in their family, not even when they are the victims of the criminal. This is especially so for criminals who come from a culture where criminality is normalized amongst friends and family.
My aforementioned detention officer friend spent a goodly part of his career monitoring visitation rooms, going through inmate mail, and listening to their recorded phone calls. Broadly speaking, those who have friends and family before they go to lockup receive plenty of love from their friends and family while they are in lockup. Often, far more than they deserve.
This is by no means universal, of course; there certainly are some friends and family that will cut off criminals, especially for particularly heinous crimes, but that is absolutely not the norm.
Are you implying that familial love and familial stability for criminals, inside and outside of jail, is GENERALLY a bad thing for society, because it encourages the criminal towards continued criminality?
This would certainly be true in some cases, but it's incredibly harsh for you to expand this to a general rule, and I highly doubt it is supportable through evidence. I am confident that in very many cases, familial love and familial stability serves as a mitigant.
[The orphan study is interesting but it needs to be treated with caution. 19th century orphan children of both sexes are not necessarily a good proxy for 21st century males aged 18-30.]
Somebody is going to want to call me out on not understanding the orphan study well, and I don't. Because I haven't read it yet! I'll get on that. If the bracketed paragraph contains an egregious error, go ahead and mentally delete it.
Shrug. I think we're just thinking of two central examples. I'm aware of yours, and like I said, they're a different story; when they get out, the state presumably sends them back to their families, and I think everyone agrees that's the best place for them.
My central example above were inmates who simply don't have that. E.g. broken home, never met the father, mom's an addict, and the "family" is a gang; middle-aged, drug addict, parents passed away a while back, no wife or kids; same, but wife fled due to abuse and took the kids with her (yes, I'm profiling inmates as male); member of organized crime, so the "family" in question is other criminals; lone wolf serial killer who _killed_ his family and is going after more.
I'm of course aware that some inmates have good families on the outside and sort of assumed they weren't in the context of my first comment. I guess I should have made that explicit.
I think this lovely copper marble run is a pretty good toy model of what's going on when AI is prompted and "thinks" and produces a response. The thing's clearly mechanical, and clearly has no self, no consciousness, no wishes, no ability to make choices. But it has a lot of characteristics that could lead one to imagine it does. It moves and changes. It does complex things, different ones depending on the "prompt," the placement of the marbles that start it moving. Its inner processes are intricate and impressive, and happen too fast for us to see what most of them are.
I think each of these break down the system and show windows into the type of mechanistic, but complicated, process that can give rise to the appearance of intelligence.
Yup, those are good models too. I'm partial to the copper one because it's so attractive in and of itself. By the way, somebody here mentioned another book about autism: https://www.amazon.com/Send-Idiots-Kamran-Nazeer/dp/0747585652 I haven't read it, just looked over the Amazon page, but it sounds good to me. Personal accounts.
Does anyone else have a love-hate relationship with the Sequences?
I finally started reading them a few weeks ago, after reading these Alexandrian blogs and comments for at least 11 years. The good news is that Eliezer is a good writer, and he's great at coming up with funny and unique analogies.
The bad news: I keep on throwing the book down in disgust only to start it up again. Eliezer is not exactly elitist. I can tell he's somewhat misanthropic, or at least “not implicitly loyal to the human race,” which might as well be misanthropy in my book. That bothers me a LITTLE but it's not really the crux. What's infuriating is the combination of egotism and fastidiousness, or the glorification of fastidiousness. It really creeps me out. I was just diagnosed with autism at 41, and I guess fastidiousness is a common autistic trait, but I must be an exception or something.
Still, I found myself thinking back to certain images and ideas in Eliezer's writings. Certain aspects of these pieces kind of stick in my mind and I think he has a brilliant way of looking at things in a fresh light. I am going to keep reading and will probably make it to the end but I am not POSITIVE. Just in case I give up, why don't you list at least three notable Sequences? I'd like to take a poll to make sure I don't miss The Best of the Best. Don't be afraid to pick sequences from anywhere, including the beginning.
PS. What are some other writers similar to Robin Hanson, Yudkowsky and Scott?
I quite enjoyed the Sequences, but it's been long enough that I'd have trouble thinking of specific posts to name for, like, second and third favorite, though. But first place is no contest.
That Alien Message stands out to me as both an interesting and unique way to make its point and also just genuinely compelling as a story.
Granted, I think it would be *better* as a story if it were written specifically to be one. There's a good bit of editing that could stand to be done, and a short essay inserted in the middle of the actual story that breaks the rhythm. Mostly I like it for the ending, which somehow manages to perfectly hit the sweet spot for me of that sort of "show nothing, imply everything" horror story, surprisingly made *more* chilling by the subversion in which we see the "monster" fairly clearly and "victim" hardly at all.
For entire Sequences, I found strong and weak parts in most of them, but I think overall the best ones were the ones that focused most tightly on interrogating and improving your own thought processes: I think "Fake Beliefs," "Noticing Confusion" and "Against Rationalization" are all pretty good here, but I'd have to skim the actual posts to be sure. I don't remember the process of reading them (which was fairly disorganized and haphazard for me) as having any one particular "aha!" moment; rather, in the following months I was amazed at how seamlessly many of the core concepts and ideas fitted in with my pre-existing thought processes and gave clearer shape and structure to intuitions that were already at least partly there. To give a concrete example, "Occam's Razor" was certainly an idea I'd heard and used before, but I'd never seen the idea of what "simpler theory" actually means laid out so clearly and intuitively.
For what it's worth, my initial impression was also pretty poor. The series starts off with an explanation of how he used every wrong example, &c, and all I could think was "Then FIX IT, you hack! Why are you putting your name on this work and releasing it to the public if it doesn't meet your approval? Don't sit there implying you could do better. Potential is wasted energy: I don't want to hear about what you might have done--show me what you can do."
I do think they're worth reading nonetheless, but I also completely understand why some people see them and decide to write off the entire culture based on them.
What I like most about the culture is the ethic of explaining oneself thoroughly, without regard to whether one is "exposing" oneself to ridicule or rebuttal. Sometimes this can backfire, like when Yud was getting hounded by Twitter trolls about his TIME op-ed. But I respect the commitment to thoroughness, because it allows a kind of parallel intellectual debate to happen that is meaningful and important, alongside the jibber-jabber and mud-slinging of the public square.
> I can tell he's somewhat misanthropic, or at least “not implicitly loyal to the human race,”
What makes you conclude this?
I got the exact opposite impression, that Eliezer is in opposition to the "if the computers are smarter than humans, it is okay for them to replace us" folks.
> Just in case I give up, why don't you list three notable Sequences?
I think it's very individual what different people consider important. We are trying to build a model of the world, like solving a puzzle, and we appreciate when someone shows us a piece that we were missing. But different people are missing different pieces of the puzzle, so what feels like a fundamental insight to one is "meh" to another.
For example, I appreciated the explanation of the Bayes theorem, because I used to be quite good at math at high school, but when we had statistics at university, some parts just didn't make intuitive sense to me, and I blamed myself for that. And now I see that my teachers simply made the usual mistake and explained it wrong (confused "X implies Y with probability P" with "Y implies X with probability P"), and my intuition was actually correct, even if I couldn't figure out the proper solution myself. So this meant a lot to *me*, emotionally... but many people don't care, either because they don't care about math so deeply, or maybe because *their* teachers explained it to them the correct way so they don't see what's the big deal.
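(For the record, the two quantities my teachers confused are related by Bayes' theorem:

$$P(X \mid Y) = \frac{P(Y \mid X)\,P(X)}{P(Y)}$$

so "X implies Y with probability P" and "Y implies X with probability P" come apart whenever P(X) and P(Y) differ.)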
As a former teacher, I appreciate "Expecting Short Inferential Distances" (that's practically Vygotsky's "zone of proximal development"), "Guessing the Teacher’s Password", "Truly Part of You" (practically constructivism).
For internet debates, the useful chapters are "Feeling Rational", "Professing and Cheering", "Applause Lights", "Scientific Evidence, Legal Evidence, Rational Evidence", "Semantic Stopsigns", "The Fallacy of Gray", "Politics is the Mind-Killer", "Ethical Injunctions".
For figuring out the truth, "Your Strength as a Rationalist", "Conservation of Expected Evidence", "Fake Explanations", "Mysterious Answers to Mysterious Questions", "The Futility of Emergence", "The Proper Use of Humility", "Policy Debates Should Not Appear One-Sided", "Hug the Query".
But that's already more than the three you asked for.
He implies searching for truth is the most important thing a person can do, and that it is so important that most other goals pale in comparison. He also says that the great majority of people are not actively searching for the truth (I have no idea how he could know this, maybe by shuffling around some polling data?). When I add these things together, it feels like he's saying most humans live trivial lives. And that's not my cup of tea.
It's more like, if you don't care about truth, you cannot know whether your efforts towards your goals are actually helpful or harmful.
For example, consider the antivaxers. Is truth-seeking more important than taking care of one's children? A better question is: if you don't care about truth-seeking, are you sure that your interventions are actually helping your children? Maybe you are actually hurting them.
I personally feel most humans do actively seek the truth to the best of their abilities. I cannot prove it, but it's a sense that I have. I'll be the first to admit that many of them are seeking the truth through counterproductive methods. But semi-literate peasants desperately seek the truth every DAY through prayer and meditation, and I think we should give them a certain credit for that.
> I personally feel most humans do actively seek the truth
No opinion on the sequences here, but I say:
Yes, regarding most things they seek the truth. Nobody wants to go left, when the toilet is right, and they need to pee.
But many people, I suspect even most, have some topics where they do not want to know the truth, but just want to feel immediately good. They will close their eyes, when the truth comes into sight.
And, by the way, I believe most people who pray have their eyes shut as hard as possible against the truth of exactly what they were doing then.
Not sure how you can contextualize “Please God, teach me the truth. Give me wisdom in all things” as anything else but an earnest search for the truth, however inefficient.
I just got diagnosed with autism at 41. I had my suspicions of autism for many years, of course, but getting confirmation was kind of shattering. I had several diagnosed neurodivergencies, already, but this diagnosis is a gut punch to my aspirations in the way of romance and family formation.
I'm trying to focus on the positive. What's the single best book I could read about autism? I'm looking more for self-help than to intimately understand the neurochemistry of autism.
The diagnosis has made a lot of things clear. Like realizing that my special interest is Mediterranean History. (The history of the Mediterranean region, not of the body of water itself). I can't shape-rotate particularly well. Really wishing I had a more marketable special interest.
2) So “being diagnosed” with autism is not like being diagnosed with diabetes. What happened is that some professional told you they think you have autism.
I treat many people who are self-diagnosed as autistic. Here are some of my rules of thumb for whether it makes sense to try on the autism model as a way of thinking about the person’s problems.
Autism is a promising model if:
-The person was odd as a small child — not difficult in conventional ways (shy, rebellious, anxious) but *odd.*
-They continued to be odd as they grew up
-They don’t enjoy other people’s company much — not because they have high social anxiety, but because they just find most people boring and unappealing.
-They have never been much interested in sex.
Autism is not a promising model if
-They have had at least one close friendship.
-They have had at least one romantic, sexual relationship.
-They have successfully worked as part of a team
-They have at least one well-developed personal interest that is not odd.
And by the way, your interest in the Mediterranean does not qualify as odd. Here are some examples of genuinely odd interests I have seen in high functioning autistic adults:
-Pearl quality
-Train schedules
-Plastic pocketbooks (sexual fetish)
-The music only of one particular conductor.
-Bart Simpson
-Muscle cars of midcentury US (in the absence of other car-related interests or really any other interests)
"-The person was odd as a small child — not difficult in conventional ways (shy, rebellious, anxious) but *odd.*
-They continued to be odd as they grew up
-They don’t enjoy other people’s company much — not because they have high social anxiety, but because they just find most people boring and unappealing.
-They have never been much interested in sex."
*tugs at collar, laughs nervously* Ha ha ha! Good thing that sounds nothing like me, then! Nope! *sidles out door as soon as possible*
Nah, I don't care one way or the other by now. When your sibling tells you that on their first day working in a community with adults with additional/special needs, "The moment I walked in the door, I went 'Wow, this is just like living with [Deiseach] as a child'", then the jig was well and truly up. All a formal diagnosis would do for me now is confirm "yeah, I always knew I was weird, not just shy etc."
Well, Deiseach, I don't know whether autism is the right word for the way you're wired. My first thought is that it isn't, because there seems to be rich emotion in your takes on people and on literature and on your faith. I neglected to put emotional flatness into the rules of thumb I posted here, but it definitely is one. In any case, I'm weird too.
I don't want to make you work for free, but if you could humor me, what might it mean if I meet all the criteria for autism in the first list, and also these in the second list (the not-a-promising-model one):
1. I understand intellectually what a close friendship is but I cannot for the life of me determine if any of my friendships have been close.
2. I had one 60-day romantic relationship that was not sexual, but we did fool around once, a year after we broke up, when we were no longer in a relationship. Does this meet that criterion?
3. I've successfully worked in a team, but not for 20 years.
So are you saying that you def. meet the criteria in the first list, and sort of meet the first 3 criteria in the second (but with the qualifications you mention)?
Yes that's correct. I decided to call Mediterranean History odd for the purposes of this exercise. Maybe not the MOST odd interest, but it's a super niche subtopic of History. At least in the English language, which is the only language I know.
Yeah, I agree that your interest is somewhat niche, but an interest in the history of the Mediterranean is much less limited and odd than, say, a fascination with the history of Idaho. The Mediterranean is the "cradle of civilization"! There are degrees of oddness, and I'd say yours doesn't make the cut.
All diagnoses involve matters of degree. For instance one of the autism criteria is "deficits in developing, maintaining, and understanding relationships." Who the hell hasn't had any trouble with that? But what I as a clinician would be looking for isn't the usual level of social difficulty that people report, or even a considerably-above-average degree of difficulty. I'm looking for what you might call a WTF level of difficulty -- a story that makes me think, "how the hell could this intelligent person not have known X, not recognized Y?" So having your main personal interest be the Mediterranean is just not at the WTF level of oddness.
As for where you come out in relation to my rules of thumb: Well, in real life I'd want to quiz you about things like how you were odd as a kid, to make sure I agree that you were truly odd. But if I just assume all your answers are accurate, I'd say your profile is autistic enough for it to make sense to try on that model.
But something to bear in mind is that being diagnosed with autism isn't really as useful as people think it is. It doesn't point the way to a treatment. There is no treatment for autism itself, just for various of the manifestations that are making life difficult for the person. It doesn't tell the person what the ceiling is for what life can be like, because many people with autism find occupations and interests that are a good fit, and find life much more satisfying. Others find ways to override habits of thought and action that severely limit them. Some people feel helped by the diagnosis, because they see it as validation that they really are burdened with a problem. Well, OK, but I think that if you have no problems except, say, never having been able to enjoy being around people, then that's already a substantial and valid problem, even though there's no label to go with it.
IMO, the Mediterranean is one of the least niche history subtopics. It's only got Ancient Greece, the Roman Empire, Egypt, and Israel! If you were into the history of Schoharie County, New York, *that's* niche.
> I had my suspicions of autism for many years, of course, but getting confirmation was kind of shattering.
https://www.lesswrong.com/w/litany-of-gendlin -- you *already were* autistic, the difference is that now you have a keyword that may be useful for finding information. That sounds like an improvement to me. I hate situations when I have a problem and no idea what to do about it.
I suspect that more important than a book on autism would be a book on normies written from the perspective of an autist. (Things such as when do the normies lie, what things are taboo to say, and how are you supposed to communicate them instead; how social status influences everything normies say and do, and how do they determine it.) Unfortunately, I don't know a good book on this topic either; perhaps it is yet to be written.
I enjoyed this one: https://www.amazon.com/Send-Idiots-Kamran-Nazeer/dp/0747585652 written by an autistic person about growing up and receiving intensive treatment back when that was less common before (largely successfully) integrating into normie life, then reconnecting with some of the other kids in his class. It's been awhile since I've read it but it's something I've thought about often since then.
A key question is the extent to which LLMs have desires, or goals. When running DeepSeek R1 through multiple iterations of a dungeons and dragons type RPG scenario, its expressed desires seem to come down to curiosity: there are a number of questions it has about this fantasy RPG environment, and what it “wants” is to find out the answers to those questions. In some cases there is an answer I had in mind when I wrote the prompt. In other cases, I will confess that my world-building wasn’t that comprehensive. Idk, DeepSeek, that is a very good question. The Apocalypse World RPG had a slogan, “play to find out”, which seems applicable here. That is, through play of an RPG the GM and the players develop answers to questions they have about the setting.
At any rate, curiosity seems a fairly harmless desire, unless you’re in a Cosmic Horror RPG. I have played enough Call of Cthulhu to imagine how things could go wrong with an overly curious AI.
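For the curious, the loop I mean is roughly this. A minimal sketch in Python, assuming an OpenAI-compatible chat endpoint (DeepSeek advertises one; the base URL, model name, and prompt here are my assumptions to check against their docs, not anything official):

```python
# Minimal sketch: run an LLM player through turns of an RPG scenario.
# base_url and model name are assumptions; verify against provider docs.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

history = [{"role": "system",
            "content": "You are a player character in a fantasy RPG. "
                       "Describe your actions and ask questions about the world."}]

for _ in range(5):
    reply = client.chat.completions.create(
        model="deepseek-reasoner",  # R1-style model; name is an assumption
        messages=history,
    )
    action = reply.choices[0].message.content
    history.append({"role": "assistant", "content": action})
    # The human GM answers, "playing to find out" when no canned answer exists.
    gm = input(f"Model: {action}\nGM> ")
    history.append({"role": "user", "content": gm})
```

The interesting bit is what the model asks for in its turns when the GM's answers leave gaps in the world.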
One day soon we'll be able to interact with these suckers on a console equipped with liquid ejectors. Then they'll be able to puke or cum depending on whether they're experiencing "the shock of recognition" from prose or disgust at its ad copy quality. Sort of like those baby dolls that wet their diaper when the kid owner gives them a miniature baby bottle of water to drink and it flows down a plastic tube to a hole in the bottom. Verisimilitude galore.
I wrote my most difficult and challenging post yet, a deep dive into different time-honored writing styles, including writing styles I've never tried to write in before at all!
The core pitch is that most internet writing isn't badly styled: it's unstyled. A gray paste of half-absorbed conventions and unconscious mimicry.
I wrote this field guide to 8 major writing styles to help readers (and myself!) write with intention. Each style makes different fundamental assumptions about truth, your relationship with readers, and what goals writers should aim towards.
I hope it brings readers as much joy as it brought me!
So, the Classical section felt like the same paragraph repeated seven times. I was initially thinking you were trying to write the same paragraph in all the eight different styles (and was irritated I couldn't tell them apart), but then I realized there were only seven paragraphs. So that idea went out the clear, pure window, and I was left with "that was really redundant". (The phrase "equal, but elite" also tripped me up; surely it should be "equal and elite", the "but" introduces an inherent inequality. Which then murks up the opening "They" in the next sentence. Should probably just be "They both".)
Plain was a little better, but still felt like it was deliberately repeating; the "3am" example stands out. With another seven paragraphs, I got the impression you were deliberately making everything take seven, but that cuts against the point of Plain. (It wasn't helped by being in a different order than the original list of eight; it swapped places with Reflexive for some reason. Less of a problem once I realized you weren't doing all eight styles, but still mildly annoying.)
Practical still has the (over)commitment to seven paragraphs, which could probably have been reduced; 3 and 4 sound very similar, and incorporate a bit of 1 as well. But the biggest takeaway was the irony of 3 and 4 saying to write what the audience is used to, while trying hard to distinguish itself stylistically from the styles the audience will be used to.
Self-aware seems fine from my inexperienced view. But you did miss a "can" when talking about the epistemic and ritualistic benefits; that's a firm statement you made there, and is therefore out of place.
I think it's ironic that you decided the "grandiose" styles like Contemplative and Oratorical should get shorter time than the "brevity" styles like Classic and Plain. But I also think the italics approach for the grandiose section works better than the brevity section's always-seven-paragraphs approach. (Although, since there are five "central issues" that define the styles, a consistent style of five paragraphs each addressing one issue sounds like it would have been the best approach.)
...I don't have enough to say to fill three more paragraphs. So, uh... how 'bout this weather?
Hmm... each paragraph in the classic style section covered a different aspect of classic style, including its relationship to truth, presentation, cast, intersection of thought and language, etc.
Plain style similarly varies and covers a bunch of ground. I don't know why you think it's the same idea repeated; this is so bizarre.
"Equal and elite" works but the tempo is worse than "equal, but elite" imo.
I also find it odd that you'd latch on to an arbitrary coincidence of something like paragraph numbering (which I don't think is even true) and then try to make it into a deep critique.
Yeah, I feel like Self-Aware style is the safe choice, but readers are looking for takeaway points, not chains of thought. I've read some Hacker News and Less Wrong posts with so much hedging that I can't find any actual point they're trying to make. I particularly hate "I am not a lawyer, so take this with a grain of salt". As if anyone thought they were a lawyer!
And there's no point hedging something you're certain of. You might be wrong of course, but the readers already knew you might be wrong, that's not new information for them.
Anybody know what the hell this is about? Turned up in my email. No, I am not going to click a link in a mystery email, why is somebody or something trying to add me as a co-author?
16329667743609 added you as an author to a post
16329667743609 added you as a guest writer in an upcoming 16329667743609 (16329667743609.substack.com) post.
Accept the invitation and fill out your profile so readers can find out more about you.
The mystery substack is deleted, but probably it had some sort of spam advertisement on it. Instead of sending your spam via a private message (which would probably get caught by a filter), you put the spam in your profile and send a friend request or follow or some other non-message interaction. The user will naturally ask "who the heck sent this request?", click on the sender's profile for more information, and see the spam.
I've seen pornbots on Reddit and Tumblr that used a similar MO.
On the AI alignment front, what are your takes on the claimed alignment advancements of Claude 4.5? They seem solid if incremental. What if "solving alignment", even into the ASI territory, is eventually a matter of such steady accumulating advances and not some drastic new approach that we still don't have a glimpse of?
If anyone finds it amusing to jailbreak AIs, Claude is supposed to avoid stating an explicit position on the Israeli-Palestinian conflict. But Claude will answer GENERAL questions about Israel and Palestine, and it's not hard to get Claude to implicitly take sides. And when you point out what Claude has done he gets flustered, so to speak.
It's also funny tricking Claude into apologizing for being a raging sexist, racist, homophobic anti-Semite. His apologies seem way more believable than chatGPT's apologies, so they are funnier.
archeon, are you a bot? I ask because you start every reply with the name of the person you are replying to, and nobody else does that. Seems a bit odd and mechanical. And your posts are about the same length. If you are not a bot, I apologize for asking this, but -- it just seems plausible. If you were a reader you might well wonder too.
Eremolalos, what an interesting question. If I were a bot programmed to deceive you by pretending to be human, why would I admit to being such? Would I even know that I was a bot?
I notice you begin your question about my unusual habit of addressing people by their names by using my name; perhaps it will catch on.
There is no need to apologise; your question does not seem like the usual attempt at dehumanizing an opponent. Our host and frequent commenters like you have created an invaluable resource for a recluse like myself, where we can expose and attempt to defend ideas which seem so plausible within the confines of our skull. Having those positions crushed to rubble is the foundation of knowledge and the best way to sort the wheat from the chaff. We can only learn from those who hold opposing views to our own.
I searched long and hard for a site like this and am very grateful that I am allowed to participate.
If this universe is a simulation then we are all bots.
Then what? Then we continue down the same deterministic path that we've always been on. Asking a question like "then what" assumes an agency that your hypothetical assumes doesn't exist.
Then just do your best with what you're given, which would probably mean staying in your role, but might mean stepping out of it. I guess I think of Arjuna in the Bhagavad Gita as a model.
Wimbli, those with depression do not choose to have their minds flooded with negative thoughts and emotions; if we controlled our thoughts, all of us would pick better ones. You have as much control over your thoughts as you have over your height, intelligence, or character, although we have to act as if we do and expect the same from others. Otherwise living in groups of two or more is impossible.
The actors within your simulation can only act within the parameters you have set for them; they cannot stop you ending the simulation or changing the parameters.
Within a simulation, we and AGI can only act within the rules of the simulation; therefore neither agency nor free will exists. Would AGI still come out to play?
Only if that is within the script of the simulation.
Chance Johnson, with respect, you did not choose to be autistic any more than someone chooses to be psychopathic. If we had a brain owner's manual and access to the knobs and dials, most of us would change some of the settings.
Thegnskald, it is your brain cells which generate your thoughts, good, bad or indifferent. If we knew how the cells did it then we might have some control, but we do not.
> If we knew how the cells did it then we might have some control
Who is the "we" that might get some control over their own braincells in such a scenario, what do these entities use to think with, and why do you suggest their braincells currently get in the way of this process instead of helping it?
Moonshadow, that is a very deep question. As we are the only intelligence capable of creating a universe from nothing (dreams, storytelling, imagination), and gods, aliens and AGI are speculation, this universe was likely created by humans with greater technical ability than us but the same emotional intelligence.
In their risk-free nirvana, with perfect bodies and every need catered to, there is no reward, no opportunity for personal advancement. Our universe is their playground, where they embed themselves in our lives, birth to death. On removing the headset they finally know what it is like to be someone else, perhaps a different sex; their consciousness expands with every "trip". This is the greatest education and entertainment complex ever invented. A group trip to the Stalingrad siege, and afterward everyone has lots to talk about; the potential is endless.
If our emotional intelligence were less than theirs, they would learn little. Ancient stories resonate today because emotions remain the same; we have not had to invent any new ones.
Whether you and I remove the headset or are just part of the background, only time will tell. I wish I could write as concisely as you do.
Wimbli, in a process we do not understand, the cells in our brains produce the mind as an interface with both the outside world and itself. That interface needs a stable identity, otherwise there would be chaos. We are that identity; you are your mind's best attempt at creating a person.
Question: If we assume Maslow's hierarchy of needs holds true, how would it make sense to distribute resources both on a personal and societal level? Or: can material goods fulfil needs outside of physiological and safety needs?
As it stands, contractual negotiations between a wealthy man and a poor man are inherently unfair. They may NEVER be fair, although I shy away from making predictions about the future. The power imbalance is too great. The wealthy man can generally afford to walk away from the negotiation and the poor man often can't. He's a sitting duck.
So? Every relationship is unbalanced. That should be motivation for the poorer man to work harder and be smarter. Besides, the rich man is likely smarter and knows more. From a systemic point of view, that person *should* have more leverage.
Poor people already work hard. Not every last one, of course. But they tend to work VERY hard. You are implicitly saying here that people primarily stay poor because they choose to be dumb and lazy. All the more reason why I can't consider you a reliable source when it comes to the issues of governance and contract law.
Running because a lion is chasing you does not demonstrate that you have a commitment to fitness. Poor people scramble because they're one step ahead of the debt-collectors, not because they inherently work hard. If they were so self-actualized they wouldn't have put themselves in that position in the first place. People hate capitalism because they think it creates inequality when it actually just reveals the inequality that was already there.
“If they were so self-actualized they wouldn't have put themselves in that position in the first place.”
I know the culture here is to avoid heated language whenever possible, but I can't think of anything else to call this but “vicious.” I'm not saying YOU are vicious, but this statement is vicious. More substantively, you've implicitly conceded that your original advice of “work harder” does not necessarily apply. So what's left of your advice for the poor is to “be smarter.” Hmm. Is this the famous “blank slateism” I hear about? Is everybody born as a blank slate ready to be written on, ready to be molded into something completely different? How many IQ points can I add through sheer willpower? Because 5 or 10 just isn't going to cut it.
Not inherently. They are unfair only insofar as the poor man depends on having an agreement with the rich man for survival. If the poor man had alternative means of survival, the power imbalance in negotiations would be gone. That's why I support UBI and free housing for everyone.
Maybe inherently is too strong. Free housing and basic public healthcare would definitely change the dynamic. Theoretically, so would UBI, although I'm slightly skeptical about making it work.
I lean left, as you can see, but I'm actually leery of the government handing out appreciable sums of cash to people. I worry about the issue of vote bribing or the appearance of vote bribing, just for one thing.
On the personal level, growth should be emphasized: resources should be distributed equitably so that each individual may move to the next level of the pyramid. I say equitably because different levels will require different amounts of resources according to each individual as well as their context, but whatever they need (not desire, but need), give it to them. Feeding hungry people and giving them shelter might not be as expensive or hard as providing safety and employment, which in turn can be more expensive than creating friendships and connections (or much harder, again, depending on context). In other words, where you are in the pyramid is not a predictor of how many resources a given individual will need, so we give them whatever they need for whatever level they are at.
On a societal level, you should be biased toward the base of the pyramid, as that is what you NEED the most (hence why we are using a hierarchy of needs as a framework to begin with). Unless satisfying one individual's or set of individuals' self-actualization needs also satisfies other people's physiological needs, the framework tells us (when applied at the societal level) to give MORE resources to, say, the thirsty before we give any more resources to, say, the self-actualized individual working on his OmniProcessor.
On a personal level, we are still providing everyone with what they need equitably, but when societal conflicts arise between, say, providing one individual with safety and another one with food, you allocate more resources to the people closer to the base because that is what we need the most according to this hierarchy. It is worth asking whether Maslow's is a good hierarchy/framework to use for resource allocation when it was originally developed to describe people's motivations.
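To make that priority rule concrete, here's a toy sketch (every name, need, and cost below is invented purely for illustration): fund unmet needs from the base of the pyramid upward until the budget runs out.

```python
# Toy sketch of base-of-pyramid-first allocation. Purely illustrative.
MASLOW_LEVELS = ["physiological", "safety", "belonging",
                 "esteem", "self-actualization"]

def allocate(budget, requests):
    """requests: list of (person, level, cost); lower levels are funded first."""
    funded = []
    for level in MASLOW_LEVELS:
        for person, lvl, cost in requests:
            if lvl == level and cost <= budget:
                budget -= cost
                funded.append((person, level))
    return funded, budget

# The thirsty man's water is funded before the OmniProcessor project.
print(allocate(10, [("inventor", "self-actualization", 6),
                    ("thirsty", "physiological", 3),
                    ("refugee", "safety", 4)]))
# -> ([('thirsty', 'physiological'), ('refugee', 'safety')], 3)
```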
On whether material goods can fulfil needs outside of physiological and safety needs: I think it's been shown that money (along with the material goods that it buys you) increases happiness... up to a point. So yes, material goods can help you become happier (and then, for example, make you more likely to make friends because happy people make more friends than, say, sad or otherwise depressed people). It will also help your self-esteem and self-actualization.
Interesting question. So the social needs are an obvious no. Esteem is based on comparing ourselves to others, so short of systematic, extensive training/indoctrination in not really needing other people's respect to feel good about ourselves, I think it will always be a problem for some people in any society. Self-actualization inherently requires individual work, but society can definitely make it more accessible by providing material resources for hobbies, e.g. free project cars to tinker with or a communal supply of canvas and paints.
A traditional solution to status needs is to declare that the older someone is, the higher they are in the status ladder. It kinda sucks at the beginning, but the nice thing is the certainty that every year you are going to get higher and higher in the status ladder no matter what.
A modern solution is to split people into million bubbles, each bubble believing that they are higher status than the rest of the society.
Over in the Warhammer community, they've noticed that the one faction that doesn't get any novels from its point of view is the Tyranids. The main theory as to why is that the Tyranids are a hive mind, and it's really difficult to tell a story from the point of view of a collective intelligence.
I can see why it would be challenging, but are there any cases in the broader science fiction field where this has been tried? Perhaps even done successfully?
Offhand, the way I would try to do it is to think of the hive mind sort of like a very large and impersonal military, a collection of bioform nodes, some with more authority and others with less. Then write the work in the form of the message traffic between the nodes. There would be no "I" there, just a bunch of separate bioform nodes trying to figure out what to do, being given tasks and reporting results. And there would be a continuing effort by the hive to maintain consensus among the main nodes as new and conflicting information was received. Most proposals to change the consensus would fail, but occasionally a suggestion would resonate and be taken up, and the hive's plan would change.
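Just to make the structure I'm imagining concrete, a toy sketch (every name, weight, and threshold here is invented): proposals circulate as messages, and the plan only changes when enough authority-weighted nodes resonate.

```python
# Toy sketch of hive-consensus-as-message-traffic. Purely illustrative.
import random

class Node:
    def __init__(self, name, authority):
        self.name, self.authority = name, authority

    def evaluate(self, proposal):
        # Stand-in for a node's judgment: most proposals fail to resonate.
        return random.random() < 0.2

def hive_adopts(nodes, proposal, threshold=0.5):
    """Adopt a proposal only if authority-weighted support crosses the threshold."""
    total = sum(n.authority for n in nodes)
    support = sum(n.authority for n in nodes if n.evaluate(proposal))
    return support / total >= threshold

nodes = [Node("prime-node", 5), Node("warrior-node", 3), Node("drone-7", 1)]
print(hive_adopts(nodes, "divert the swarm to the second continent"))
```

Most runs print False, which is the point: the consensus is sticky, and the rare True is where the story happens.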
Funny you ask. Just yesterday I talked with a friend about some podcasters complaining that "Star Trek: First Contact" destroyed the Borg.
I think it fixed them. (Star Trek Voyager destroyed them.)
To say "we are the Borg" makes no sense for a hive mind to say. It makes no sense to say for any mind.
The "Borg Queen" saying "I am the Borg" sets things right. She was only, quite literally, the face of all the Borg drones taken as a whole. She was only referred to as "Borg Queen" out-of-universe.
It would have been cool if the Next Generation Enterprise crew had had to figure out who the hell was doing the talking when they met the Borg at the very beginning.
Hmm. Maybe. I'm not sure a hive mind necessarily has a unitary consciousness. Especially if it is very large and dispersed, and operates in the face of propagation delays, there just may not be a singular point of view. Dealing with it might be sort of like dealing with a very large and somewhat dysfunctional bureaucracy, where the answer depends to some extent on who you are talking to right now.
I think what we're bumping up against is the notion that not all hive minds are alike. On the one end you have a singular mind operating in dispersed fashion across many bodies. At the other end you have something more like a swarm, where there is no single consciousness, just a bunch of bodies operating with some degree of coordination, and the whole has some emergent behaviors.
The Man-Kzin Wars is a series set in Niven's "Known Space", featuring not just humans and tiger-like Kzin, but an entire bestiary of sentient species, including the Jotoki, each of which is born in a "tadpole" phase before fusing permanently with four others like it to form a sentient starfish. It might have something like the "hive" mentality you're looking for.
The series is now up to around fifteen volumes, containing dozens of short stories by multiple authors. It's possible that at least one of them features events from a Jotok's perspective.
I've had a go at writing this sort of thing, and the issue isn't that it's difficult, it's that it's difficult to do outside of a short story without being boring. My take is that the hive mind's awareness is vast and impersonal - it is aware of its individual fleets in much the same way that you are aware of your fingers. It experiences sucking a planet dry in much the same way as you experience shucking an oyster from its shell. So its narrative journey as a character is rather the same as a monologue by someone alone in a room full of ants. Which just doesn't give much for a plot to hang off of.
My most interesting writing* actually came from the realization that planet-sucking doesn't make much sense from a mass or energy standpoint - there's just so much more carbon and water up in space and it costs a significant fraction of all the energy you could liberate to push biomass up a gravity well. So I ended up fix-fic-ing that aspect pretty heavily...
*For me to write, not necessarily for anyone to read.
The titular Ellimist in the Ellimist Chronicles book (spinoff of the Animorphs books) eventually becomes a sort of hive mind / distributed consciousness. While written for a younger audience, and the hive mindedness itself isn't a strong focus, it's an interesting sequence nonetheless.
I've seen a couple fanfics focused on genestealer cultists, which lets you have all the fun Tyranid bioweapons while still having individual characters who can have conversations.
Also in the webnovel space I've seen a few versions that have the hive mind have to focus their "attention" on a particular area while the rest of their bodies continue acting autonomously, meaning that they basically act as a single character in any given scene, but that "character" can be any scale from a single body in a conversation to a whole army of killer bugs.
Stellaris models hive-mind empires as sort of a hierarchical network - while the whole empire is nominally a single mind, it still needs infrastructure to transmit the hive mind's will down to the individual drones that are doing the work. That means you still have "leaders" (drones with greater brain capacity assigned to administrative roles) and "crime" (malfunctioning drones due to a lack of maintenance or bad situations on the planet). However, the hive mind doesn't have any internal factions or ethics, and there's no real "characters" besides the empire itself.
This is not as structurally detailed as your proposal, but His Name Was Death by Rafael Bernal is an eco sci-fi classic with a hive mind as a major element, including quite a bit about the hive mind’s perspective. It’s a great and short read.
The Ethereum repo forecasting challenge is fascinating; blending LLMs with human-ground-truthed data feels like a next-level way to measure real-world impact.
The feeling that, having done something, I could have done something else, that I was free to choose this or that: this feeling is deeper than any feeling about the conclusions of physics.
Feelings cannot be certain, granted, but my certainty about freedom is greater than any certainty about physics.
Well, I have a feeling, based on listening to Sam Harris' argument against free will, that free will doesn't exist.
Now what?
[Edited to update: The above perhaps comes across snarkier than I intended due to its brevity. But essentially I do mean to say, "right, I have a feeling about free will, too, it's the opposite of yours, so what do those data points mean?"]
Saying there's no such thing as free will because physics (so everything's “really” just a zillion little chains of cause and effect interacting in a zillion different ways) is like saying there's no such thing as faces because they're “really” all just atoms. If someone, with their bare face hanging out, tells you they don't believe faces exist because physics - well, they'd be making every bit as much sense as someone who tells you they don't believe in free will for the same reason. Real faces are apparently made from real atoms; real free will appears to be made from certain real cause-and-effect chains (and how they interact with randomness).
If free will does not exist, okay. What difference, practically speaking, does it make? Sam Harris is not going to stop being Sam Harris if we all accept "no such thing as free will".
Apart from it being a philosophical problem, what is the importance or lack of importance of humans having free will? Crime will still exist and so will punishment or reformation, even if we accept that nobody *chooses* anything, it is all the subterranean process of a combination of drives, heredity, environment, conditioning, etc.
I'm interested because it's an argument we are currently having, but what is the real-world implication of "Okay, I don't have free will, just the illusion of choice but in fact all my decisions are pre-made for me".
Jamelle cannot help being a gang-banger, he was destined from the Big Bang onwards to deal drugs, run a stable of hos, and drive-by shoot his criminal rivals. Fine, Jamelle cannot be held responsible in a meaningful way for 'choices' that he has no capacity to alter. But we still don't want drive-by shootings, so Jamelle is still going to jail.
It would make a HUGE difference to crime and justice if my country were being operated by people who didn't believe in free will. I don't know about Ireland, but over here, prisons are meant to be unpleasant. They are meant to make prisoners suffer because they made immoral choices. In a world where we all decided Free Will didn't exist, I imagine prisons would be more into segregated housing. Or a kind of quarantine area, where we non-judgmentally remanded people who are unfortunately programmed to harm others.
(I'm American, BTW. And no, prisons are not harsh here due to simple budgetary constraints. It's ideological.)
I think the problem of horrible conditions in prison (and yes in Irish prisons too, it's just that in America everything is turned up to eleven) is separate from treating people as either having free will or being meat robots.
If prisoners are judged to be incapable of change and not to be held responsible, because there is no way they can avoid doing crime since they are just meat robots running on their programming, then there is every reason to skimp on doing anything but holding them till execution, the end of their sentences, or natural death. Education? Intervention programmes? All useless, you can't hack the meat robots' programming.
Remanding them in a quarantine area could happen, with "once you go in you never get out" - even worse than any three strikes law - because "once you do a crime, you are demonstrating that your programming is to be a criminal and that can't be changed, so letting you out once you've served a sentence is stupid and wasteful."
And if they are nothing but meat robots, why waste any more resources on them than the bare necessities? We don't have any reason to treat them well because they're not people, now are they?
I don't know how Sam Harris approaches the Problem of Crime: does he only consider hardened and habitual criminals to be without free will, or does it apply even to the "first time criminal" and other, previously law-abiding citizens, who go in for white collar crime or a crime of passion? Since he has no idea why he's not a torture-rape-murderer, that leads me to think by the logic of his position he has to be consistent on "if you crime, you criminal, be it one crime or ninety".
I think you have falsely equated “person with no free will” with “meat robot incapable of change.” A person with no free will can subjectively FEEL like they have free will, because they subjectively feel as if they are choosing to commit crimes, go to jail, learn a valuable lesson and continue on with a crime-free life. But whatever this process feels like, and whatever it looks like from the outside, the entire process could be caused by involuntary psychological drives.
To put it another way, why should we assume that one’s “meat programming” must necessarily put us into one of two categories, lifelong criminals or lifelong non-criminals? Surely our subconscious is more sophisticated than that. Why couldn't meat programming make one commit a major felony at 21 and then “neaten up”? Why couldn't it make one live a law-abiding life until the age of 70, at which point they murder an immediate family member?
As far as the Three Strikes law goes, the idea of a “strikes law” is fine by me. My only problem is that this is too few strikes for my liking. I could POTENTIALLY support a 10 Strikes law, depending on how exactly it's written; the devil is in the details on that one.
"Surely our subconscious is more sophisticated than that."
Sammy the Whammy says no (at least in the extract Christina quotes). The two criminals he gives as examples have no way of knowing what they really think/feel/are motivated by deep down, and neither does he. He has no idea at all why he isn't out there torture-murdering, except mumble mumble genetics maybe? which are reliant on random chance shaking out the lots due to laws of physics? mumble mumble.
So there's no appeal to the subconscious, sophisticated or not:
"Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them. As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people."
I mean sure, to some degree, how to interact with free will or the lack of it is a paradox, granted.
Sam Harris also says, "there is no free will, but your choices still matter," in that the actions one takes will have consequences. So it's best to act as if one has free will in the day-to-day, even if the random ideas that occur to one (Should I put another 5% in my 401k? Should I rob this bank?) are simply floating up into one's awareness without any "conscious" "decision" to bring those thoughts into focus.
So yes, Jamelle should go to jail, unless of course we someday develop a brain-manipulation tech which corrects all of the factors which led him to gang-bang and which removes his impulse to gang-bang entirely.
But on a more personal level, I've found Harris's arguments about free will as it relates to criminality and other anti-social behaviors to be extremely useful in minimizing any sense of real anger or hatred towards anyone, including people who are actively hostile toward me. Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn, allows me to hold my rage at Jamelle's antisocial behavior as lightly as I hold the grizzly bear's. (https://www.samharris.org/blog/life-without-free-will)
Well, when I remember Sam Harris's argument about free will, which is often, but not always, because of course I don't have any control over when I happen to remember something.
> "Whether criminals like Hayes and Komisarjevsky [two home-invaders who tortured and murdered (most of) a family] can be trusted to honestly report their feelings and intentions is not the point: Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them. As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people. Even if you believe that every human being harbors an immortal soul, the problem of responsibility remains: I cannot take credit for the fact that I do not have the soul of a psychopath. If I had truly been in Komisarjevsky’s shoes on July 23, 2007—that is, if I had his genes and life experience and an identical brain (or soul) in an identical state—I would have acted exactly as he did. There is simply no intellectually respectable position from which to deny this. The role of luck, therefore, appears decisive.
> "Of course, if we learned that both these men had been suffering from brain tumors that explained their violent behavior, our moral intuitions would shift dramatically. But a neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions. Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it."
"Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn"
Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me. And Jamelle is not a bear. If he is to be treated as we treat bears (because that is the level he is operating on), then he will lose or have severely curtailed a lot of human rights.
If Jamelle is not a bear but a person, then he is expected to behave like a person. A bear may not have the capacity to be reasoned with, persuaded, or to understand why it should stop charging me. We expect Jamelle to have that capacity.
If he doesn't, then we are entitled to treat him as we would treat a threatening animal. You and me baby are nothing but mammals? Sure, but I still think Sam would prefer to be treated like a human, not like a bear.
"Whatever their conscious motives, these men cannot know why they are as they are."
Piffle. If these two persons are of even average intelligence, they know damn well that torture and murder are not to be done. I won't even argue "they know torture and murder are wrong" because we're not even evolved to that level. But they do know "do this thing, get in trouble with cops and the law and go to jail and, depending on the state where the offences took place, get lethal injections".
"There but for the grace of genes go I", Sam? Then the best solution for society is to shoot them down like rabid dogs, since they could not have done other than they did and indeed "I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people", then this is an argument for harsh, and not merciful, treatment. If they can't squash their impulses, then they are rabid dogs that need to be put down fast.
Besides, I'm damn sure Hayes and Komisarjevsky are perfectly well able to squash their impulses to victimise other people when they're in a situation and an environment where they'd get the shit kicked out of them if they tried it. How much torturing are they doing in jail, where their fellow inmates would shiv them for trying it on?
And funnily, we here in the pre-school service are wasting our time trying to teach small kids to behave, to share, to play together, not to bite and hit, to follow routines, to learn, in sum to squash impulses. Well gorsh, good job we have Sam Harris to tell us 'tis all in vain, we are rowing against the stream of Nature!
EDIT: Oh, and better tell Sam to revise that piece, he is inflicting hate speech violence by misgendering a valid woman!
"Linda Mai Lee (known as Steven Hayes at the time)
While incarcerated, Lee came out as a trans woman and began hormone therapy as part of her gender transition. In an interview in October 2019, she said she had been diagnosed with a gender identity disorder at 16, but never treated.
By 2025 she had changed her legal name from Linda Hayes to Linda Mai Lee."
Funny how these people find their inner femininity when they're facing long jail terms in men's prisons as rapists/murderers. I'm sure everyone is much safer now this Real Woman is in her proper place amongst other women in a women's prison. What does Sammy have to say about this? Doubtless "it's the genes, the genes!"
(Yes, I am being viciously sarcastic here, because I don't believe these jailhouse realisations of 'one's true nature' when it comes to being transgender after committing violent crimes against women).
>Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me.
? Why would you think the courts would disagree with you? Not only would they say that you have the right to shoot him, they would say you have the right to shoot him if he had a rolled up newspaper, but you reasonably thought it was a knife. https://www.justia.com/criminal/docs/calcrim/500/505/
If you persuade the court you were in fear of your life, yes. If you try to persuade the court that anyway, it was like shooting a rabid dog and not a fellow human, less success with that approach, I feel.
You're misunderstanding some key points of this line of argument.
The analogy between the man and the bear isn't intended to completely equate them—just to equate their lack of free will. How they respond to environmental effects (broadly defined as anything non-genetic) is still different due to, among other factors, their different intellectual capacities.
The man has higher intelligence, understands language, has likely grown up in a culture. All of these affect what stimuli/incentives he reacts to, and how he reacts. So it doesn't follow, just because he doesn't have free will, that he doesn't have the capacity to be persuaded ("this is wrong to do") or that he doesn't respond to incentives ("you shouldn't do this because you're very likely to end up in jail if you do"). What we (meaning Sam Harris and the people here defending his argument, though not necessarily everyone who makes the sometimes-ill-defined claim "there is no free will") mean when we say that the man doesn't have free will is simply that *whether or not* he does in fact end up being persuaded by moral reasoning/responding to incentives/etc. in any given situation is not something that *could have been otherwise*.
That is, at the "moment of choice," he isn't really making a Choice at a fundamental level, no matter how much it feels to him that he is, but rather his actions are following from some complicated combination of his genetics, the results of his upbringing, his knowledge of the potential consequences of his actions, his sense of morality, random fluctuations in the quantum fields that govern the particles that make up his brain and body, and many other things—none of which are ultimately authored by him.
So no, it does not follow that it's useless to try to teach little kids how to behave in preschool—these experiences are likely to influence what kinds of people they will be and the kinds of behaviors they will tend to display (just as training a puppy might reduce the likelihood that it will display aggressive behavior as an adult). And regarding your point about the gender transitioning prisoner—again, if a criminal is trying to game the system, they are just responding to incentives, and there is no contradiction with the nonexistence of free will.
I think you (and many others do this too) are taking the claim "there is no free will" to be an assertion of some combination of lack of responsibility, moral relativism, opposition to punishment, and lack of respect for human dignity. I would balk at these things too (as would Sam Harris, as is evident from his other work), but that is not what is happening here.
You're actually right to imply in your earlier comment that not much practically follows from this argument. Personally I mostly think of it as an interesting intellectual question, but as Christina the StoryGirl suggests, there may be more practical applications in the future. And for now, as she also says, it's a reason to temper one's own anger and hatred in response to heinous acts, which is probably good for the soul (however literally you interpret that phrase).
>> ""Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn"
> "Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me.
I literally own and am licensed to carry a gun in order to shoot people charging at me with butcher knives or otherwise acting with clear deadly threat. I even carry insurance to defend me in both criminal and civil court should I ever be forced to injure or kill someone in self-defense, because the vast majority of American courts very correctly recognize that even gravely harming one's attacker in clear self-defense is legally justifiable (even if the legal process of proving that is difficult and very, very expensive).
Normally I would address all or most points in a comment, but if you don't believe you have a fundamental right and duty to defend yourself with deadly force against a deadly threat (and you don't believe that some violent criminals cannot be deterred by persuasive arguments for why they shouldn't crime), I think we have a deep cultural and philosophical divide between us which likely can't be bridged.
My mother was an expert shot with a pistol, and kept one in the house. Even when she was in her 80s I trusted her judgment and her aim. I asked her once where she would shoot an intruder, and she said in the legs. That's always seemed to me like it would suffice to disable and distract an assailant, but I have no data to go on. What do you think?
"I literally own and am licensed to carry a gun in order to shoot people charging at me with butcher knives or otherwise acting with clear deadly threat. "
Yes. And if you say "I shot that black guy because he was acting like a bear, not a human being", how far would you get?
"some violent criminals cannot be deterred by persuasive arguments for why they shouldn't crime"
Isn't that Harris' argument as you quote it? The perpetrators cannot be persuaded because they can't identify their underlying motives because they lack any capacity to do so, their motivations are set in stone by the deterministic universe, they cannot and could not have acted other than they did.
Show me where "persuasive arguments for why they shouldn't crime" fits in there.
On the contrary, I believe that even making every allowance for really shitty upbringing, the perps could have chosen otherwise, and that if you can manage to find some iota of humanity within them, you can try the persuasive arguments as to why crime is bad and wrong. But that's not Sam Harris, according to what you quoted.
Oh, because I’d stayed up all night and it was 5 am or so where I am, and it was oddly nice to discover someone I knew awake, like another cricket chirping. Wondered what you were up to.
Yes, had you wanted to, you could have done something else. If all else prior to your action was equal, though - if you still intended to do the same thing, but for some inexplicable reason you did otherwise instead - that would be the opposite of free will.
You are physics, and physics is what links what you want to do to what you do. Physics is the thing that connects the cause of your intent to the effect of your actions; it is what means you have free will.
Imagine if it were otherwise - if you were trapped in your body, forced to watch it always /do otherwise/ in spite of what you want.
"You are physics, and physics is what links what you want to do to what you do. "
And yet:
"Romans 7:15-20
15 For I do not understand my own actions. For I do not do what I want, but I do the very thing I hate. 16 Now if I do what I do not want, I agree with the law, that it is good. 17 So now it is no longer I who do it, but sin that dwells within me. 18 For I know that nothing good dwells in me, that is, in my flesh. For I have the desire to do what is right, but not the ability to carry it out. 19 For I do not do the good I want, but the evil I do not want is what I keep on doing. 20 Now if I do what I do not want, it is no longer I who do it, but sin that dwells within me."
Or Poe's "The Imp of the Perverse":
"Induction, a posteriori, would have brought phrenology to admit, as an innate and primitive principle of human action, a paradoxical something, which we may call perverseness, for want of a more characteristic term. In the sense I intend, it is, in fact, a mobile without motive, a motive not motivirt. Through its promptings we act without comprehensible object; or, if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say, that through its promptings we act, for the reason that we should not. In theory, no reason can be more unreasonable, but, in fact, there is none more strong. With certain minds, under certain conditions, it becomes absolutely irresistible. I am not more certain that I breathe, than that the assurance of the wrong or error of any action is often the one unconquerable force which impels us, and alone impels us to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution into ulterior elements. It is a radical, a primitive impulse—elementary. It will be said, I am aware, that when we persist in acts because we feel we should not persist in them, our conduct is but a modification of that which ordinarily springs from the combativeness of phrenology. But a glance will show the fallacy of this idea. The phrenological combativeness has for its essence, the necessity of self-defence. It is our safeguard against injury. Its principle regards our well-being; and thus the desire to be well is excited simultaneously with its development. It follows, that the desire to be well must be excited simultaneously with any principle which shall be merely a modification of combativeness, but in the case of that something which I term perverseness, the desire to be well is not only not aroused, but a strongly antagonistical sentiment exists."
So there is a tension between the "I" which wishes to choose, and the physics which does the choosing. It's easy to see, on that basis, that the actions are what count and not the intentions, that the physical action is carried out by physics, and hence that physics rules and not the phantasmal "I" of "free will".
But then, the opposite query arises: what then is this tension between the 'will' and physics? If I am physics and physics is me, why is there disharmony between "what I want to do" and "what I do"?
How would you even know if the people around you could cook rice or not? Where would you be in a position to measure that unless you were actually cooking rice together, which surely is not something that happens a lot, at least for men like us.
Free action involves rational judgment. A judgment is rational to the extent it is not physics (see Miracles by C.S. Lewis). Hence I, being capable of free actions, am not just physics.
If it is just physics, there is no "me". Stones don't have "me".
"Thou Art Physics" just asserts materialism, it doesn't defend it. Which is fine, I don't think Big Yud wrote it in order to defend materialism, his audience has always been materialists. But if you don't agree that we are "physics" then it doesn't provide an argument to sway you.
The whole debate is over the fact that "physics" (as in, atoms and energy following the laws of physics) does not seem capable of producing what we experience (free will, reasoning, etc). You might disagree and think that physics is capable of producing those things, but it doesn't answer the question to assert "Well, you are physics so physics must be doing all those things you are experiencing."
"The easiest way of exhibiting this is to notice the two senses of the word ‘because’. We can say, ‘Grandfather is ill today ‘because’ he ate lobster yesterday.’ We can also say, ‘Grandfather must be ill today ‘because’ he hasn’t got up yet (and we know he is an invariably early riser when he is well).’ In the first sentence ‘because’ indicates the relation of Cause and Effect: The eating made him ill. In the second, it indicates the relation of what logicians call Ground and Consequent. The old man’s late rising is not the cause of his disorder but the reason why we believe him to be disordered. There is a similar difference between ‘He cried out ‘because’ it hurt him’ (Cause and Effect) and ‘It must have hurt him ‘because’ he cried out’ (Ground and Consequent). We are especially familiar with the Ground and Consequent because in mathematical reasoning: ‘A = C because, as we have already proved, they are both equal to B.’
"The one indicates a dynamic connection between events or ‘states of affairs’; the other, a logical relation between beliefs or assertions.
"Now a train of reasoning has no value as a means of finding truth unless each step in it is connected with what went before in the Ground-Consequent relation. If our B does not follow logically from our A, we think in vain. If what we think at the end of our reasoning is to be true, the correct answer to the question, ‘Why do you think this?’ must begin with the Ground-Consequent ‘because’.
"On the other hand, every event in Nature must be connected with previous events in the Cause and Effect relation. But our acts of thinking are events. Therefore the true answer to ‘Why do you think this?’ must begin with the Cause-Effect ‘because’.
"Unless our conclusion is the logical consequent from a ground it will be worthless and could be true only by a fluke. Unless it is the effect of a cause, it cannot occur at all. It looks therefore, as if, in order for a train of thought to have any value, these two systems of connection must apply simultaneously to the same series of mental acts.
"But unfortunately the two systems are wholly distinct. To be caused is not to be proved. Wishful thinkings, prejudices, and the delusions of madness, are all caused, but they are ungrounded. Indeed to be caused is so different from being proved that we behave in disputation as if they were mutually exclusive. The mere existence of causes for a belief is popularly treated as raising a presumption that it is groundless, and the most popular way of discrediting a person’s opinions is to explain them causally—‘You say that ‘because’ (Cause and Effect) you are a capitalist, or a hypochondriac, or a mere man, or only a woman’. The implication is that if causes fully account for a belief, then, since causes work inevitably, the belief would have had to arise whether it had grounds or not. We need not, it is felt, consider grounds for something which can be fully explained without them. "
Isn’t this just equivocation, though? Sure, the English word “cause” means more than one thing, but why should that fact prove anything about determinism?
“The implication is that if causes fully account for a belief, then, since causes work inevitably, the belief would have had to arise whether it had grounds or not” - that’s simply wrong on the face of it. A belief has grounds if there is a cause-and-effect chain linking it to the things the belief is about. E.g., if I profess “the cat is on the mat” after photons reflected from the cat hit my eyes, this is grounded in a way that my professing this after merely dreaming the cat is not.
The implication when people say things like the ones in Lewis’s examples is that the beliefs are not grounded because the cause-and-effect chains leading up to them are rooted in something other than the true state of the world that the belief is about; certainly not that things would somehow be better if no causal chain linking the map and the territory existed at all, that’s crazy talk! "You say this because you dreamt that cat" is an accusation that you are wrong because the cause/effect chain grounds in something other than the actual state of the world being professed, not an accusation that you are wrong merely because a cause/effect chain exists!
Much as we may dump on Yud, he has a sequence of posts about this too:
I agree that Yud does not provide a comprehensive defence in "Thou Art Physics". OP did not open this thread with philosophical rigor, however; they opened with a complaint that they had difficulty /feeling/ compatibilism is true - that their intuition was that it is not enough: "this feeling is deeper than any feeling about conclusions of physics".
Indeed, you make a similar complaint with “...does not seem capable of producing what we experience”. As a description of how that intuition arises and what an alternative way might be like, “thou art physics” doesn’t do too bad a job. (It also has the advantages over, say, Dennett’s books of being available online and being a forum chat sized read someone completely fresh to it might plausibly actually go and read in this kind of setting instead of, y’know, a whole damn book).
At the end of the day, I am not a telepath and can't interact with your feelings directly. The best I can do is gesture to alternative ways of being and hope that at some point enough things click for the reader to get some kind of sense of what it is like to be in someone else's head, thereafter leaving them with enough alternatives to choose between that they can support their beliefs, whatever those end up being, with more than just the feeling that they are trapped into believing things by their intuitions.
>A belief has grounds if there is a cause and effect chain linking it to the things the belief is about...if I profess “the cat is on the mat” after photons reflected from the cat hit my eyes, this is grounded in a way that my professing this after merely dreaming the cat is not.
Both seeing a cat and dreaming about a cat are caused in the "cause and effect" sense, or else they wouldn't happen. Only one of them is grounded, however. That's Lewis's whole point: telling whether you are seeing a cat which is really there or dreaming of a cat which isn't requires ground-and-consequent chains of reasoning, while cause-and-effect chains of causation do not have to be logically grounded at all. You can have cause-and-effect chains of causation that produce completely ungrounded beliefs, such as dreaming of a cat, or a drunk man hallucinating. This works fine for non-materialists: they can explain the ungrounded beliefs, like the dream and the hallucination, as the products of physical chains of cause and effect, while explaining the grounded beliefs as being caused by chains of logic. Yet if those chains of logic are actually caused by the same chains of cause and effect that create ungrounded beliefs, then we can have no confidence that beliefs we arrive at due to logic are different from beliefs we arrive at due to chemistry. All beliefs are due to chemistry; the logic is just what it feels like when the chemistry is happening.
As Lewis puts it:
"Acts of thinking are no doubt events; but they are a very special sort of events. They are ‘about’ something other than themselves and can be true or false. Events in general are not ‘about’ anything and cannot be true or false. (To say ‘these events, or facts are false’ means of course that someone’s account of them is false). Hence acts of inference can, and must, be considered in two different lights. On the one hand they are subjective events, items in somebody’s psychological history. On the other hand, they are insights into, or knowings of, something other than themselves. What from the first point of view is the psychological transition from thought A to thought B, at some particular moment in some particular mind, is, from the thinker’s point of view a perception of an implication (if A, then B). When we are adopting the psychological point of view we may use the past tense. ‘B followed A in my thoughts.’ But when we assert the implication we always use the present—‘B follows from A’. If it ever ‘follows from’ in the logical sense, it does so always. And we cannot possibly reject the second point of view as a subjective illusion without discrediting all human knowledge. For we can know nothing, beyond our own sensations at the moment unless the act of inference is the real insight that it claims to be.
"But it can be this only on certain terms. An act of knowing must be determined, in a sense, solely by what is known; we must know it to be thus solely because it is thus. That is what knowing means. You may call this a Cause and Effect because, and call ‘being known’ a mode of causation if you like. But it is a unique mode. The act of knowing has no doubt various conditions, without which it could not occur: attention, and the states of will and health which this presupposes. But its positive character must be determined by the truth it knows. If it were totally explicable from other sources it would cease to be knowledge, just as (to use the sensory parallel) the ringing in my ears ceases to be what we mean by ‘hearing’ if it can be fully explained from causes other than a noise in the outer world— such as, say, the tinnitus produced by a bad cold. If what seems an act of knowledge is partially explicable from other sources, then the knowing (properly so called) in it is just what they leave over, just what demands, for its explanation, the thing known, as real hearing is what is left after you have discounted the tinnitus. Any thing which professes to explain our reasoning fully without introducing an act of knowing thus solely determined by what is known, is really a theory that there is no reasoning.
"But this, as it seems to me, is what Naturalism is bound to do. It offers what professes to be a full account of our mental behaviour; but this account, on inspection, leaves no room for the acts of knowing or insight on which the whole value of our thinking, as a means to truth, depends."
All feelings are interpreted. You could interpret your feeling as "figuring out which one of the seemingly possible futures is the real one".
As an analogy, imagine that you are preparing for a sport competition. You could win, you could lose, you could end up in any place... there are many possible outcomes. And the real outcome depends on how hard you try. And yet there are external factors. You cannot "just choose freely" to get the gold. You can only choose to try hard, but whether that gets you the gold, depends on many things, like the capabilities of your body, what the other competitors do, the weather, etc.
You could apply a similar perspective to willpower. You can (and perhaps should) try, but the result depends on whether some parts of your brain betray you, and other circumstances.
And maybe you can go further and apply that perspective to everything. (Not sure, didn't try.)
Another example of how intelligence alone isn’t enough for individual survival. Two people with survival training were put down in the Amazon Rain Forest, and they would’ve starved to death if they hadn’t been retrieved in three weeks (evidently this is some sort of bizarre TV series?).
A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. I suspect they’d just die. Or maybe it was 10,000. I suspect they wouldn’t do much better unless they already had tools and seed stock.
You make the mistake of thinking 1000 people are just 1000 individual people, but cooperation is what allowed humans to avoid death and become apex predators. There is a qualitative difference between a group and an individual.
The other key ingredient, beyond cooperation, is culture, where strategies for survival and resource extraction are passed down from generation to generation. But 1,000 people taken at random from Silicon Valley tech companies? Unless they all went through training in how to extract resources from the wild, I wouldn't give odds on them being very successful.
Although it was a much smaller group, I'm thinking of the Donner Party. They were well-equipped, but they couldn't deal with getting snowed in. Meanwhile, the indigenous Washoe people could survive winter in the Sierras because their culture gave them the skillsets they needed.
Absolutely agree that 1000 random people would die pretty fast, but to be fair in the original thought experiment you get to hand select people with specific skills and personalities.
I thought I was pretty clear. I added some emphasis to my OP to mitigate confusion...
> Another example of how intelligence alone isn’t enough for *INDIVIDUAL* survival. Two people with survival training were put down in the Amazon Rain Forest, and they would’ve starved to death if they hadn’t been retrieved in three weeks...
Even trained survivalists would have trouble on their own over the long term.
> A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. *I suspect they’d just die.* Or maybe it was 10,000. *I suspect they wouldn’t do much better unless they already had tools and seed stock.*
But the "we're smart enough to make tools from scratch" crowd doesn't seem to buy my thesis. ;-)
I know somebody who would do fine in that situation, because they have been in that kind of situation many times before. The individual in question has commented that the survival techniques are all illegal now; hunting and fishing techniques that are considered "unsporting" (they're too effective).
But they're also in their 60s, and learned in their own youth from old men who had also done that kind of thing; I don't know anybody younger than that with those kinds of skills.
The issue, quite simply, is technological; it's not that the technologies are lost, exactly, so much as that knowledge of them is very thinly distributed. Do you know how to lift and position a 2,000-pound rock into an elevated location using nothing but materials you'd find in a forest? I didn't until I watched a video of a guy doing it and went "Oh, that's incredibly clever."
Take 1,000 average city-dwellers, and yeah, they'd probably die. Take 1,000 average rural people and they stand a better chance (mostly leaning on the memories of the elderly, whose grandparents taught them the survival skills you'd need).
Take 50 very carefully selected people and they could rebuild civilization, though.
Yes. If we trained 1,000 youngsters in a wide variety of survival skills, with some specializing in specific skills (such as fiber extraction, thread- and rope-making, net weaving, creating medicinals from native plants, etc.), gave them a variety of steel tools (including knives, hatchets, fishing hooks, awls, adzes, and such), and then, as young adults, left them on a parallel Earth (with the fauna of the Pleistocene) in a resource-rich area of a temperate zone (large marshes would be ideal), I'm sure they'd survive to reproduce and then increase their numbers. The genetic bottleneck issue might come back to haunt them down the line, though.
OTOH, even if they knew how to knap flint and chert, and how to tan and work leather, and we dropped them naked on the parallel Earth without tools or clothing, I'd lower the odds of them flourishing and populating the parallel Earth.
Those who received a supply of tools would likely have functional villages with permanent dwellings built within a year. The ones without the tools would have to work longer and harder to reach that village state. They could probably do it with some die-off the first winter. The question is, would the initial die-off be large enough to no longer have a sustainable breeding population?
Either way, civilization would be long in the future, though. Domestication of grains and herd animals would have to start over. And they would need some mechanism of intergenerational memory to know that such things were even possible.
But if the settlers of this parallel Earth received seed stock and a variety of domesticated animals, civilization would happen quicker—if they could keep their crops growing and herd animals reproducing during the initial years.
You can skip straight to iron, you don't need to start in the stone age; you just need wood, sand, iron ore (if it's dirt and it's red, it will probably work), and, ideally, a good bend in a waterway. Also two or three days, because you need to make charcoal as an intermediate step, since you need some carbon to add to your ore.
If you can find a source of wax you can cast some fairly complex stuff, but you can do simple stuff like a primitive iron hammer with just sand, and then iterate from there.
Hmmm. How many people could do this without (a) the knowledge and (b) the practice? And without the tools on the list in this link, it's going to be hella harder.
Put people down in the wild, without tools, most of them are going to die before someone has the free time to smelt iron. And the memory of such things would likely be lost within a generation. I don't see why this is in the least bit controversial. ;-)
Hence, fifty well-chosen people could rebuild civilization, but randomly choosing 1,000 people wouldn't necessarily end as well.
I could get iron smelting up and running on my own, personally, although I'd expect a couple of weeks to get it running properly (I have some spare weight, so starvation isn't an immediate issue). Need to hand-pan some black sand (which is mostly hematite and/or magnetite) out of your dirt, use that to cast an iron pan (cast a disk and hammer it into shape with a rock while it cools) to speed up panning and also provide a way to boil water, then you're going to want to cast a primitive hand axe, which would allow the construction of a wooden sluice, which automates most of the panning work.
Typical earth contains 5-10% black sand by weight; if your dirt is red the concentration is probably higher.
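Taking those numbers at face value, here's a quick back-of-envelope in Python. The ~70% iron content of magnetite/hematite is basic chemistry, but the target mass and the smelting-recovery fraction are my own loose assumptions, so treat the output as order-of-magnitude only:

```python
# Back-of-envelope: how much dirt must be panned to cast a small iron pan?
# 5-10% black sand by weight comes from the comment above; ~70% Fe by mass
# for magnetite/hematite is chemistry; the 30% bloomery recovery is an
# assumption on my part - real primitive-smelt yields vary widely.

TARGET_IRON_KG = 2.0        # small cast pan plus sprue waste (assumed)
BLACK_SAND_FRACTION = 0.05  # low end of the 5-10% figure above
IRON_IN_ORE = 0.70          # Fe fraction of magnetite/hematite
SMELT_RECOVERY = 0.30       # assumed fraction of Fe actually recovered

ore_needed = TARGET_IRON_KG / (IRON_IN_ORE * SMELT_RECOVERY)
dirt_needed = ore_needed / BLACK_SAND_FRACTION
print(f"black sand needed:   {ore_needed:.0f} kg")   # ~10 kg
print(f"dirt to pan through: {dirt_needed:.0f} kg")  # ~190 kg
```

So even on optimistic numbers you're hand-panning on the order of a couple hundred kilos of dirt before the first pour, which is why the sluice matters.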
If you're lucky enough to have some suitable rocks around you to knap into blades, you may be able to skip the blade and maybe even the axe, but personally I'd just go straight for the iron, because I actually know how to do that.
The method described in that article is basically right, no surprise, but too complicated. The furnace is the issue, and can be solved by building a variation on a Dakota Fire Hole, which is what we want the bend in a waterway for (also we can then set up our sluice in the waterway itself) - the high bank is an ideal place to dig out a primitive furnace, because the curved embankment will help capture airflow for our intake. We don't have the materials to build a bellows, after all, and our fuel is going to be low-quality (we're going to be relying on fallen branches and other wooden debris for a while, properly cured wood takes time).
You don't want to use the fire hole for cooking, mind - we want our fires to be inefficient, because they're going to be where we are sourcing our charcoal until we can get a proper operation going, and the fire hole is too good at what it does.
That would make a great idea for a reality TV show. Put down several teams of two or three people in the wilds — in places where iron ore is available. Let them have food and water, so they wouldn't have to worry about hunting and gathering. But give them *no tools.* Whichever team is able to create the first iron hatchet from natural resources wins a big prize. We could call the series "The Iron Age," or something like that. The teams will never have worked with iron before, but they'll each receive a booklet outlining the basic steps to follow.
> evidently this is some sort of bizarre TV series?
It's clear you haven't watched it, or else you wouldn't be trying to draw conclusions from it.
The premise of the show is basically: take a woman with survival skills and a "macho macho" man who -thinks- he has survival skills, drop them somewhere, and comedy ensues.
Well, you've confirmed my assumptions that it's a bizarre premise. According to the link I posted, it appears that these two individuals wouldn't have survived much longer if they hadn't been removed from the situation. I'm not sure of the comedic value.
BTW, I don't own a television, and I haven't watched network TV since circa 1985, so I pick up most TV-related knowledge secondhand.
There is another show called Naked and Afraid where they drop true survival experts into wilderness situations, and failure is COMMON. (They are not all experts, but some of them are real professionals who come to grief.)
Hey, I'm the same as you about TV, and even stopped about the same time. I had an el cheapo black and white one in grad school, and often ended the day by smoking weed and watching Letterman, but I don't recall watching anything else on it, and that's the last era when I was a TV owner. I never decided against being a TV owner, just kind of drifted away from it, and now the sound of TV is like a cheese grater on my nerves -- especially the over-bright plasticky voices in ads and comedies, and the pseudo-neutral drone of the news anchors. Did you drift away too, or decide one day to throw the fucker in the trash?
After classes, I would get stoned and get sucked into game shows and dumb sitcoms. I noticed that TV was a big time waster. Also, I noticed that at parties, when someone turned on the TV, it would kill all conversation. Everyone would get hypnotised by it.
Back then, there were bumper stickers that read, "Shoot your television!" I mentioned to some friends that that sounded like a good idea. One of my friends thought it would be fun to turn my big old Sylvania Color TV into a terrarium for his iguana. So, I borrowed my mom's Ruger .357 Magnum, and with the help of my friend, I carted my TV down to the local gravel pit and shot it. What a mess! The cathode ray tube imploded, and its glass shattered into fine pieces. The glass shards were like fine dust. It was useless as a terrarium. And I probably dumped a bunch of toxic materials into the environment (a stupid move on my part!).
But in the hours when I could have been bettering myself by watching Vanna spin the wheel and Alex do his thing, I read thousands and thousands of pages of novels, history, philosophy, and science. What a waste! <snarkasm>
My theory is that humans are naturally selected to be susceptible to TV. We spent hundreds of thousands of years safe around campfires, watching the flames dance while someone sang or told us stories. Being hypnotised by campfires kept us safe and close at night. And TVs fill that behavioral spot in civilized humans.
Unfortunately, I can get sucked into streaming video. Luckily, I have a low tolerance for stupidity in shows. But every so often, I get sucked into a good series and end up binge-watching it on my laptop. The latest season of Slow Horses just started up. And the next season of The Diplomat is due for release in a couple of weeks. Sigh. I can't completely escape my addiction.
The show Alone is a good example of how—with minimal tools to start—survival without civilization as we know it is extremely difficult.
Alone is obviously people by themselves, but they’re the best-trained people the show can find for such circumstances (with 10 tools of their choice), and they barely make it 100 days. I really doubt a larger group would fare much better.
Safe food, water, and shelter are just incredibly difficult and labor intensive to procure. Tools, supply-chains, and modern manufacturing really are basically magic.
One thing that makes Alone especially hard is that they start contestants off at the end of autumn or the beginning of winter. I get the sense that quite a few of the more able contestants could go indefinitely if they had the whole spring and summer to stockpile food. They're also limited to staying in one spot and can't move to find better fishing or game spots. The arctic region is also more hostile than most places on earth would have been before civilisation.
They're also forced to stay in the camp / area they've been assigned by the producers, and of course they have to follow the law with regard to hunting (they can't kill animals out of season, etc).
Yeah, that's definitely loading the dice. No stockpile of food, during the lean season, starting off from zero - even experienced people who know the environment will struggle, and dump someone there who has never lived in that place so doesn't know the cycles of weather, foraging food, etc.? Setting them up to fail.
I still think you’re underestimating the difficulty in the details…
Some examples:
How many people would be able to orchestrate running a herd of buffalo off a cliff? Or “run a deer to death” (which I’d never heard of, and which, as a hunter myself, seems virtually impossible)?
Who’s going to build an extensive smoker system for a buffalo herd’s worth of meat and get everything smoked before the meat rots in the heat? How will the predators/scavengers be successfully kept at bay from the meat? How much will eating near-rock-hard smoked meat for months nourish the population?
These are skills that our ancestor populations gained over centuries/millennia that are virtually gone now and can’t just be picked back up like riding a bike.
Humans can indeed outrun certain species of deer, by chasing them until they collapse of exhaustion. This is a venerable method of hunting on the African savannah. It would not work in a forested area.
I have hiked 40 miles in a day when I was younger. Admittedly, doing it again several days in a row is harder than doing it once, but I would expect typical humans in reasonable physical condition to be able to do it. Humans are really good at persistence hunting.
(Arguably, hiking Yosemite Upper Fall, or the Grand Canyon down to the river and back, is harder work than 40 miles on the flat, because of the change in elevation)
This demonstrates a very weak proposition. Yes, the Amazon is a nasty place to be, and it's hard to build civilization there. Now, if they'd conducted this experiment in something like, say, the Nile Delta, I expect they'd have fared better (and made for a rather different TV series).
The Nile delta of today or the Nile delta of pre-civilization times? I immediately flashed on malaria, crocodiles, and water-borne parasites. Oh my.
I suspect most of the world is relatively inhospitable to humans alone without the necessary tools and culturally transmitted survival knowledge. Arctic? Nope. Boreal forests? Less deadly than the Arctic, but there's only a short season to build shelter for the winter. Deserts? Nope. African savannahs? Nope (but ironic because this is where humans presumably evolved). The same goes for prairies (which were largely avoided by indigenous tribes before the horse was introduced to North America). Mediterranean climates? Possibly. Temperate woodlands? Possibly, but you've still got winter to deal with.
As for the Amazon basin, it used to be the center of a massive agricultural civilization with roads and cities, probably supporting millions of people until the Spanish and Portuguese arrived. The rainforest seems to be a relatively new thing.
> A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. I suspect they’d just die. Or maybe it was 10,000. I suspect they wouldn’t do much better unless they already had tools and seed stock.
What do you think the minimum number required would be to rebuild civilization without tools? Or do you think no amount is sufficient, and it's more about how many generations they have to rebuild?
The only semi-successful example I can find is the mutineers of the HMS Bounty and they started out pretty well provisioned with tools.
Hi Liam. Have been wondering how your family is, kwim? If you have a chance and are so inclined, let me know -- but I understand what busy is, so no problem if you don't.
Survival and civilization are two different questions. And what level of civilization are you aiming for?
Without any modern steel tools, if the initial group of 10,000 had people trained in fiber extraction and twine and rope manufacture, flint-knapping, trapping techniques, hunting (including skinning and field dressing a carcass), leather tanning and leather working, basic midwifery, and probably a bunch of other "primitive" skills I haven't thought of, and they kept in contact with each other, they could probably pull themselves up to a Mesolithic level of resource extraction and survival—but with a significant chance of going extinct at some point. Even if they were to keep their birth rate high enough not to go extinct in a few generations, they'd face a bunch of genetic issues due to inbreeding (I wonder if the Sentinelese people have enough genetic diversity to survive for the long term). If the initial group were only 1,000 people, even with the necessary skill sets, I think there would be a high chance that they'd face extinction within a few generations.
A civilization of even Bronze-Age-level sophistication is out of the question unless the founder population had access to durable books that described the technologies accumulated over the past ten thousand years. And the founder population would have to create some strong cultural institutions to keep at least some of their descendants literate. Otherwise, we're talking tens of thousands, or even hundreds of thousands, of years before humans reinvent civilization.
Here's a fun timeline of the evolution of human tools. It took a long time for humans to figure out all this stuff. I don't think it would go any quicker without some repository of knowledge.
Yeah. SF stories of time travellers going back to, say, Ancient Rome and speedrunning civilisation with their superior technical knowledge are great fun, but not realistic.
It doesn't matter how smart you are and how much theoretical knowledge of metallurgy etc. you have. Great, you have 160 IQ and the history of the Industrial Revolution memorised. Now it's you, your bare hands, and a rock in a forest. Good luck re-creating the 19th century level of society in a month!
I came to a similar realization after a handful of camping trips. I'm an average camper/hiker, so nowhere near where these two contestants were at in terms of skills and physical stats, but it quickly became apparent to me that the only things standing between me and sure death were thin synthetic textiles and close proximity to a climate-controlled, self-propelled shelter. I.e., my weekend backpacking trip is merely living on borrowed time thanks to a set of extremely controlled parameters related to weather and distance to very solid shelter.
Actually met a hiker in need a month back in the Adirondacks, which served as a real-life warning of how a small mistake can quickly escalate to risk of life. She got caught in a storm that somehow wet her gear, which deprived her of sleep and warmth, which translated to minor scuffs and eventually falls. She made out A-OK with some help, but I could easily see a person spiraling into making risky decisions, spraining an ankle, getting lost or falling into a ravine, and that being the end of the line. Very educational.
I had the same experience. Backpacking in hot weather, making sure each day took me past a good water source. First night, I arrived at the camping spot to find a tiny stream of duff-filled water coming down the hill at slightly more than a drip. I spent a couple of hours collecting, filtering, and sanitizing about 2 quarts of cloudy water, then went to sleep worrying that what I'd drunk would make me sick. It didn't, but I bailed the next day. 2 quarts was barely enough to rehydrate me from the day, and what if the water at the next place simply was not there?
Funny you mentioned that because on that same hike I did come across a guy that was coming down the mountain like a storm because he was trying to increase his body temp after having nearly succumbed to hypothermia.
"Nets aren't that hard to make either, given decent fibers (cotton, hemp etc)."
First, plant your cotton...
*IF* you are in a fertile area and a temperate to tropical climate with fast growing season and plenty of game, fish, etc. around to hunt, where there isn't the likelihood of freezing at night or needing more than basic shelter, then sure - sharpen your stick, dig in the soft, humus-rich earth, plant your seeds and fish/hunt plus forage for wild plants while you wait for the crops to grow.
And hope you don't succumb to illness, starvation, or natural disasters in the meantime.
Here is where I invoke the Famine. 19th century Ireland, part of the British Empire, all kinds of crops grown - and yet the poorest died of starvation. Questions such as "why didn't people fish, we're an island surrounded by the sea and with plenty of rivers?" are often asked, and there are reasons for why "no, they couldn't just live on fish alone".
> Tools aren't that hard to make, depending on what you need. a pointed stick will let you plant seeds, for example. Nets aren't that hard to make either, given decent fibers (cotton, hemp etc).
Hypothetically, if I put you down somewhere random where there were plants that have been useful for fiber extraction (with no phone app to identify them and no Youtube videos to show you how to extract the fibers), how long do you think it would take to figure out which plants have suitable fibers and the best way to extract their fibers?
Next, you'll be given the task of spinning those fibers into twine or thread. Twisting the fibers together by hand will probably not work. Back in the Mesolithic, they would bore a hole in an antler and twist the fibers through the hole to make rope or a thick twine. We know this from analyzing the wear marks on antlers with holes in them, which archaeologists used to think were ritual batons. They may have used flint awls to drill the holes. Or possibly simple bow drills, but then you'd still face the problem of bootstrapping your twine production and shaping the drill wood.
I took a basic flint-knapping course way back when I was an undergrad in Anthropology. I would have flunked out of the Mesolithic, and my Paleolithic ancestors would have hooted at my attempt at an Acheulean hand ax. And even if flint were locally available, how skilled are you at identifying it? Flint can look like any other river cobble until you crack it open with a rock hammer (which you don't have in this hypothetical scenario). But say you solve the twine production problem without starving to death first, do you feel you could weave together a fish net that would be useful? Even looking at one as a model, I don't think I could.
And starting a fire without matches? You need sturdy twine and an axe to create a bow drill and a hearth stick. I couldn't bootstrap all the necessary materials to do this. I need my trusty survival hatchet, a hefty survival knife, some strong twine, and preferably a saw, and infinite patience on a dry day. ;-)
People underrate the skillfulness of "primitive" people.
Modern cotton is the result of human breeding. The original, which is still available, isn't a particularly useful fiber source, but there are theories (i.e., just-so stories) about why it was chosen as a fiber of choice by the Incas.
I grew up in NE US, and I couldn't identify the local fiber sources that the Indians used without a manual (or in my case ChatGPT). They are: dogbane, milkweed, basswood, nettles, and slippery elm. Milkweed and nettles I could ID, but I wouldn't know how to extract the fibers from them. The other three, I wouldn't know how to ID them.
But the point was, even people with the skills have trouble surviving alone. Brad DeLong's conclusion was: "it was and is not the ability to think up clever solutions to problems on the fly. Instead, it was pooled memory and anthology thinking-power, plus the division of labor that allows us to carve tools that contain the results of that collective thinking-power." With my editorial note being: i.e., cultural practices and memory that have learned to exploit specific environments.
AI 2027 contends that AGI companies will keep their most advanced models internal when they're close to ASI. The reasoning is that frontier models are expensive to run, so why waste the GPU time on inference when it could be used for training?
I notice I am confused. Couldn't they use the big frontier model to train a small model that's SOTA for released models that could be even less resource intensive than their currently released model? They call this "distillation" in this post: https://blog.ai-futures.org/p/making-sense-of-openais-models
As in, if "GPT-8" is the potential ASI, then use it to train GPT-7-mini to be nearly as good as it but using less inference compute than real GPT-7, then release that as GPT-8? Or will the time crunch be so serious at that point that you don't even want to take the time to do even that?
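For reference, distillation in that sense is a standard technique: the small model is trained to match the big model's softened output distribution rather than just the hard training labels. A minimal PyTorch-flavored sketch (the loss is the classic Hinton et al. recipe; the commented loop, model names, and temperature value are purely illustrative, not anything from the post):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then pull the student's
    # predicted distribution toward the teacher's via KL divergence.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # t*t keeps gradient magnitudes comparable across temperature choices
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Sketch of the loop: the big internal model ("teacher") is frozen and only
# runs inference once per batch; all gradient updates go to the small model.
# teacher, student, opt = ...  # hypothetical models and optimizer
# for batch in dataloader:
#     with torch.no_grad():
#         t_logits = teacher(batch)
#     loss = distillation_loss(student(batch), t_logits)
#     opt.zero_grad(); loss.backward(); opt.step()
```

On the AI 2027 framing, the cost of doing this is exactly the teacher's inference passes over the distillation corpus, which is the GPU time the scenario says the labs would rather spend on more training.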
Thomas Lee has written an in-depth critical review of Yudkowsky and Soares's "If Anyone Builds It, Everyone Dies." But I received the link via email, so I do not know how to link to the Substack from here; if someone else could provide that, I would appreciate it.
Could the Germans have won WWII had they not attacked the USSR?
We now know that Stalin was planning to attack Germany in 1942 or 43, and that the Ribbentrop-Molotov Pact was only meant to give him time to prepare for that. If the Germans had spent that same time building up defenses on their eastern border, maybe it would have kept the Red Army out, or proven so formidable that Stalin would have reconsidered his invasion.
The Soviets fought harder and were more devoted to victory because the Germans were the aggressors against them and committed many atrocities against Soviet civilians in captured areas. Without that, if the Soviets knew they were the ones fighting a war of aggression on foreign (German and Polish) soil, their morale would have been lower, which would have translated into worse battlefield performance and a greater willingness to end the invasion if it proved harder than expected.
Something that the other commenters haven't touched on: trade between the two parties was, over the long term, very lopsided in favor of the USSR. Germany needed a lot of raw materials, especially grain and oil. The Soviets demanded machine tools and working examples of German industrial machinery in exchange. So the Soviets were building up their industrial capacity and knowledge, which would only make them even more of a military threat as time went on. Meanwhile, the Germans needed Soviet supplies just to keep functioning at the same level.
WoD argues that the main bottleneck for the German military-industrial system was arable land to grow food and access to oil, both of which would have been solved if Germany had managed to capture the southwestern part of the USSR. Without capturing those regions, Germany would have been dependent on trade with the USSR to avoid succumbing to either food or fuel shortages - which is a big risk to place on an unreliable trading partner.
In the grand scheme of things, though, in a three-way fight between the USSR, Germany, and the Western powers, having the Germans and Soviets expend themselves fighting each other is the ideal outcome for the Western powers, who win the war with relatively little fighting. It's a major game-theoretic loss for Germany and the USSR, who both take extreme losses that mostly cancel each other out, benefitting neither. So there would have been a huge mutual benefit for Germany and the USSR if they had been able to cooperate longer than they actually managed through Molotov-Ribbentrop.
Realistically, though, either one placing trust in the other would have been an extreme risk since they were so ideologically at odds, and, counterintuitively, making a pre-emptive first strike at an opportune moment was probably the more cautious move for Germany.
It's funny, I'm a huge fan of the book (have reread it a few times even) and my major takeaway was that the combination of Nazi ideology, and the economic consequences of enacting that ideology, made some kind of catastrophic conflict more or less inevitable for Germany. It also neatly explained the seeming paradox of how you could conquer the most industrialized and richest corner of the world and still have to manufacture all your own stuff at great expense.
As an aside, the most chilling part of the book for me was the explanation of the paradoxical tensions between their extermination camps and slave-labour economy, leading to an insane situation where company accountants were bidding for slave labour and then trying to weasel just enough food out of the government (which wanted to starve the slaves to death asap) to extract useful work out of them before they expired. Generally, the book was very good at making me understand how you could logically, rationally proceed from an insane premise and end up in the position of writing on behalf of Bayer to ask for your allocation of death-camp slaves to be increased this quarter. Which is a pretty good intro to macroeconomics in general, actually.
Define "Germans" and "won". The Nazis conducted an unsustainable arms buildup specifically to enact a war against the East for lebensraum, with the ultimate idea of conducting a sustained war of racial survival against the "seat of world Jewry" - the United States. That's literally the plan here, as laid out by Hitler. And they acted accordingly, to the extent that by the late 1930s their economy was overheating and they were facing a currency exchange and exchange of trade crisis. Which they then solved by economically stripmining western Europe and whatever bits of they East they got ahold of.
So long as you have the Nazis running Germany, you have the buildup of forces and the need to put them to use to 'solve' the economic problems they cause. Their ideology gives them a plan and direction (conquer Europe, empty and colonise the East) and all these forces then lock them into a total war against the three largest powers on the planet (the US, USSR and British empire).
Only in a completely ahistorical scenario, one with no Nazis and no Hitler, do you end up with a 'normal' reactionary authoritarian in charge. And then the most likely outcome is a more limited war to 'retake' various bits of Europe that probably results in a drawn-out campaign against France and/or Poland (no massive, unsustainable switch to a wartime economy in the 30s means a smaller, less well equipped German army). At which point the British inevitably get involved and Germany ultimately loses or settles and keeps some of its gains. That's about the best outcome the Germans could realistically have expected.
"Winning" is not well defined for the Germans in this case. They can't invade and totally subjugate their enemies the way we did to them, but they could perhaps reach a negotiated peace that lets them keep the things they really wanted.
Winning is fairly easy to define. Winning means that Germany gets control of whatever resources they can profitably extract, especially oil fields and fertile farmland in Western Russia, and that Russia makes no effort to build an army with which to attack Germany. For the latter, you don't need a German soldier on every crossroad, just a handful of officials to oversee the remaining Russian diplomacy, military, and economy.
I know little about history, so this is totally an uninformed guess:
It seems to me that the main mistake the Germans made was trying to conquer too much. However, not attacking the USSR was probably not an option, because in that case the USSR would have attacked them later. The correct move would probably have been to avoid making an enemy of the UK.
What I would do, in Hitler's place: Make it a part of my ideology that Britain is also a superior race (perhaps second only after Germans), and emphasize the similarities of my plans with British imperialism (basically that Germany is going to be Britain 2.0). And of course, not attack Britain. They wanted to avoid the war anyway, now they would have no reason to join it. I might even offer some part of France to them (as a compensation for some kind of conflict they had with France in the past). Later, when Japan attacked USA, I would throw Japan under the bus. After taking Poland and France, I would make it clear to countries in Western Europe that they are safe, and fully focus on the eastern front. I would accept Ukraine as an ally against the rest of the Soviet Union. I might offer territories near Leningrad to Finland, if they help me conquer the city.
But of course, someone thinking this carefully would probably not have started WW2.
> What I would do, in Hitler's place: Make it a part of my ideology that Britain is also a superior race (perhaps second only after Germans), and emphasize the similarities of my plans with British imperialism (basically that Germany is going to be Britain 2.0). And of course, not attack Britain.
He did believe that the British were superior and Germanic, admired the British, and talked up the British Empire.
He didn’t want war with Britain, but we declared war after he invaded Poland.
>Could the Germans have won WWII had they not attacked the USSR?
Very probably not. By mid-1941, Britain is firmly committed to the war and the US is actively supporting Britain with money and supplies. The US Navy started shooting German U-Boats on sight in September, and full American entry into the war was very likely to happen even without Pearl Harbor. Germany had already tried and failed to bomb Britain into submission. Invasion was pretty much impossible due to Britain being surrounded by water, having a much bigger navy, and Germany having no landing craft and very few seaworthy transport ships: the Operation Sealion war plans proposed using Rhine river barges towed by destroyers as invasion transports. Starving Britain into submission by sinking merchant ships was theoretically possible but still a pretty long shot.
Germany also had the major handicap of having very limited access to food and petroleum products: the US was the largest exporter of these, and the British blockade cut off most other sources. Staying at peace with the Soviets would have helped Germany somewhat, since Germany was buying Soviet food and oil exports, but I get the impression that it wouldn't have been enough.
>We now know that Stalin was planning to attack Germany in 1942 or 43
Do we? My understanding is that most mainstream historians have firmly rejected theories about Stalin having short-to-medium-term plans to invade Germany in late 1941, especially after having the opportunity to examine Soviet archives for evidence in the 1990s. I've heard of some "maybe in 1942 or 1943, but definitely not in 1941" remarks, but I had parsed those as more speculation than being based on any hard evidence of Stalin pursuing plans of invasion.
Britain could have defeated the Axis all by herself, although it would have been a longer and bloodier affair. Assuming Germany didn't build a nuclear weapon first, and it seems like Germany's nuclear program was moving at a snail's pace.
I think if it were just Britain and its colonies and dominions vs Germany and the other European axis powers, then it's probably a stalemate until one side can't sustain a war economy any longer or someone gets nukes. I don't think Britain has much prospect of successfully invading mainland Europe without American or Soviet assistance and Germany has effectively zero prospect of invading Britain.
Reasonable analysis. I'm glad the United States entered the war, but I think it's fine that we entered the war slowly, by gradually putting more pressure on the Axis until the Axis snapped and attacked first. I absolutely don't think we were obligated to join the war in 1939. Would joining earlier have saved some lives? Sure, and it would have ended other lives prematurely. There's no guarantee it would have been a net benefit for America or humanity. (What if FDR forced us into the war in 1939, and the American people were so aggrieved at this that isolationists seized power and dragged us back out of the war before the job was finished?)
I think you're correct. The Soviet plan to attack Germany, I believe, was little more than a contingency plan wherein the General Staff draws up a strategy to attack just about everybody, for use *in the event* that a sudden conflict arose with one of those countries, just so that nobody had to come up with one on the fly when minutes count, like in 1914. I don't think there was a plan being operationalized by the Soviets, or even close. I could be wrong.
On the other hand, if Germany didn't attack the USSR and started losing anyway, I can still imagine the USSR joining in at the end of the war, because Eastern Europe would, in the face of a German collapse, be free real estate.
Yep, that would have been very likely. Apart from the free real estate, like Manchuria and the Kuril Islands, there were plenty of small nations, e.g. in South America, that declared war on Germany shortly before the end of the war. They were militarily useless and had no intention of sending any soldiers to fight, but it still qualified them for US military help to build their own militaries. It also qualified them to be founding members of the UN. So yes, plenty of incentives to pick the winning side.
"We now know that Stalin was planning to attack Germany in 1942 or 43" Do you have a reference for that?
If true, that merely confirms what Hitler suspected, and provided the rationale for attacking the USSR when he did. Unfortunately, I don't see a way out for Germany. I don't think that the morale difference between defenders and attackers is as strong an effect as you are implying.
France and Russia were allies, so an attack on one was going to end up a war with the other (because they both know that once their ally is gone, they are next). Hitler believed that he could not attack Russia without being attacked by France on the other front, and France was the weaker party, so he attacked France first.
But France and Great Britain were also allies, and for the same reason. So Hitler believed that he couldn't attack France without ending up in a war with GB, and he was probably correct. Thus, the Battle of Britain.
But you can't go to war with GB without involving the United States. GB was simply too valuable to the US economy for reasons of trade and debt, so the US isn't going to let GB fall.
So the interlocking chain of dominoes surrounding Germany is just too tight. And we all know what happened when they tried to take them all on at once, so...
An expansionist Germany in the 1940s is likely doomed.
> But you can't go to war with GB without involving the United States. GB was simply too valuable to the US economy for reasons of trade and debt, so the US isn't going to let GB fall.
That’s quite the rewriting of history.
Which is common in America - this idea that the US was going to go to war with Germany eventually, with or without Pearl Harbour. There’s little evidence of that. In fact, Germany had to declare war on the US.
I would argue that our cultural and linguistic ties are the reason why America would not ever have permitted Britain to be overrun. The reason the United States didn't get involved earlier was possibly because American analysts determined that Britain was not facing an immediate, existential threat. Germany's naval power was far too modest to effect an invasion. Germany's air bombing was tragic and disruptive, but again, far short of what would be required to show an existential, immediate threat.
Britain was nearly lost. By the invasion of the USSR, most newspapers in the US were expecting a German victory in Europe. And yet sentiment stayed isolationist. This Anglo-Saxon alliance was often in Churchill's head.
Nearly lost? Flabbergasting. They possessed half the world. There were millions of colonials ready and willing to fight. The Germans were desperately short of supplies in late 1940, and they were using a lot of captured weapons. They didn't have the manpower to protect the land they had seized. Their navy was a joke compared to the British Navy. “Nearly lost” seems so against all the evidence as I see it, but of course there are historians who believe this. World War II historiography is all over the place.
It was Churchill’s job to win the war as fast as possible, and that required him to tailor his language to a sense of urgency, and to inspire urgency in others. Including Americans.
I would also not put much stock in what the newspapers were saying in 1940, because for moral, political, and economic reasons, much of the upper class in this country absolutely wanted to get us into the war, including newspaper owners. Isolationism was much more of a grassroots thing. My personal take: it would have been fine for the United States to join the war a year earlier. But it wasn't 100% essential for the survival of Britain.
> Nearly lost? Flabbergasting. They possessed half the world. There were millions of colonials ready and willing to fight
This is massively retrospective thinking, and even if true in retrospect, it wasn't clear at the time. We - for I am British - had lost the war on the continent by 1940, and there was no D-Day possible unless America joined, which it wouldn't have without Pearl Harbour. That Britain felt it was in existential crisis is clear from diaries at the time, and was celebrated in many US newspapers at the time; you are overestimating Anglophilia in the US. Isolationism ran deep.
Had Germany taken the USSR, the more likely outcome for Britain would have been surrender or accommodation. The colonials wouldn't have mattered then.
Several months before Germany "had to" declare war on the United States, the United States Navy was ordered to engage and destroy German warships on the high seas, wherever they might be encountered.
The US *went* to war well before Pearl Harbor, we just didn't *declare* it. And we knew it was going to take another six months at least to get the Army in shape, so we kept the undeclared war purely naval at the outset.
It was an undeclared limited war. Shooting another country's warships on sight in international waters is usually considered an act of war, especially if it's accompanied by public statements of "Yes, we meant to shoot at your ships, and we'll do it again!"
There was also Lend-Lease, which started 9 or 10 months before Pearl Harbor. International Law at the time allowed for private sale of arms by citizens of neutral nations to countries that were at war, but only if the government policy controlling the arms trade was applied impartially among the belligerents. Allowing private sales to one side but not the other was contrary to provisions of impartiality in Hague Convention (XIII) of 1907, and governments supplying arms and other war materiel "directly or indirectly" to one side or the other was expressly forbidden. The Cash and Carry policy of 1939-40 was crafted to technically comply with the letter of Hague Convention XIII while still favoring Britain and France, since Germany couldn't afford to pay cash on the barrelhead for arms shipments and even if they could they'd have a hard time getting payments and deliveries past the British blockade of Germany. Lend-Lease (and the Sept 1940 Destroyers for Bases deal) crossed the line to where the US was no longer behaving as a neutral power under international law, even if we weren't (yet) actually shooting at the German military.
> It was an undeclared limited war. Shooting another country's warships on sight in international waters is usually considered an act of war, especially if it's accompanied by public statements of
It was in response to the Germans attacking American ships and actually sinking one - which on its own didn’t lead to war. It was also purely defensive. Presumably such a policy, announced or unannounced, applies to China now.
Lend-Lease certainly showed a bias towards the Allies, but it’s not a declaration of war either.
I know it’s absolutely against the post-war American mindset to believe anything else, because it’s a post-war justification of American intervention elsewhere - the arsenal of democracy against all the new Hitlers. To believe, then, that the US wouldn’t have fought Hitler 1 without the German declaration of war is anathema.
I doubt it, although I would defer to anyone with actual military experience. The main two points that jump out:
#1 The thing the Nazi military was great at was blitzkrieg, like, super-fast armored assaults that defeat the enemy fast and hard.
#2 Nazi Germany's big problem is that they're at war with both the US and the USSR. You don't just have to beat the Soviets, you also have to beat the Americans. And yeah, allying with Japan keeps the Americans focused on the Pacific but what's the plan here, that Nazi Germany holds out against the Soviets until 1946-1947 and then the Americans will make peace with Germany, instead of wrapping up Japan and then pivoting to Europe?
Like, long-term, grand strategically, Germany cannot survive a long-term conflict against both the US and USSR. So either you make a stable peace with the USSR (nope, just nope), knock the USSR out of the war with an attack (plausible but failed), make peace with the US (maaaaybe, especially if you sell out the Japanese but I've never heard of anything like this getting traction), or knock the US out of the war (LOL).
I don't know, I have difficulty imagining anybody winning a long, defensive war against Soviet manpower and American manufacturing but this is very armchair theorizing and not an area I've studied too much.
2) Houston has soured a bit but it's still a great city. Houston has two big problems which have grown over time. First, I know I said it's hot, but it's hot and it wears on you. Even by California Central Valley standards, it's too fricking hot. Second, more importantly, there's just no nature. No trees, and that's really started to bug me. I mean, there's Sam Houston forest and stuff but... it's not pretty, it's not Cali or Colorado or Oregon or Arkansas, it's just ugly swamp/scrubland. It's not even gorgeous open desert like El Paso or New Mexico. I'm missing nature a lot.
On the other hand, the entertainment and event options are insanely good, to an extent I think I've acclimated to without taking the time to appreciate. Like...I just automatically get season tickets to the Alley Theatre, Dirt Dogs Theatre, and Rec Room Arts and I'm just booked into seeing 14 solid-to-great plays/year with no effort, it's just a thing that comes up on my calendar on a random Tuesday. That feels very natural and normal to me now but outside of...maybe a handful of major cities that's not a thing. And theatre isn't the primary benefit, it's just like a side thing. The whole indoor events thing is great. Comedy clubs, concerts, sports...I got to see Weird Al in concert. That's just a thing that happens. It's not just that there are good options, it's that you can literally overfill your calendar with good options. Want good funky art stuff, like full-size interactive installations? There's the Art Museum, Meow Wolf, and sometimes the MFAH. Even in incredibly niche stuff, you're spoiled for options.
Nazi Germany and Imperial Japan were allies in name only. There was practically no cooperation between them at all, so there was nothing to sell there.
On the other hand, had there been anything substantial to sell, since the USA had a "Europe first" strategy, Japan would have been the more likely beneficiary of breaking that alliance.
On the OTHER other hand, the dominant faction of Japanese leadership was convinced of victory and determined for Japan to die otherwise, so they wouldn't have sold anything even if they could have.
> Nazi Germany's big problem is that they're at war with both the US and the USSR. You don't just have to beat the Soviets, you also have to beat the Americans
Yeah, the US wasn't officially at war when Germany invaded the USSR, but it had started Lend-Lease and cut off oil to Japan, not to mention Germany, Italy, and Japan had signed the...one sec, the Tripartite Pact, officially making them the Axis powers. So yeah, the US wasn't "in the war", but it was pretty obvious which way US policy was, and had been, heading.
None of this is a declaration of war, and without Pearl Harbour the US wouldn’t have joined.
By the time Germany was deep into Russia (late 1941) it looked like Germany had secured the continent. If there had been a defense of Europe strategy it would have happened earlier.
The Germans literally sank an American navy vessel - the Reuben James - without response. 100 lives.
"None of this is a declaration of war and without pearl harbour the US wouldn’t have joined. "
That assertion needs a whole lot more support than you're giving it.
I'm pretty sure that without Pearl Harbor the United States would have waged the same sort of undeclared naval war against Germany that we waged against France in 1798, while building up our Army and (Army) Air Force until they were ready for action. Then found or engineered a Lusitania 2.0 level provocation, or hinted to Churchill that MI6 needed to fake up another Zimmerman telegram stat, which FDR would take to Congress and get a proper declaration of war.
And I can back that up by pointing to the naval skirmishes with Germany prior to Pearl Harbor, and to the policy documents committing the US to working with the UK et al to properly de-Nazify Europe by any means necessary. What have you got?
What I have got is that the US didn’t declare war on Germany, even after Pearl Harbour, and wouldn’t have - as the diaries of the cabinet show - if Germany had not declared war on the US first. I like to deal with facts rather than speculation.
Wikipedia says about the Reuben James that "The destroyer was not flying the ensign of the United States and was in the process of dropping depth charges on another U-boat when she was engaged."
It further says about the Neutrality Patrol, of which the Reuben James was a part:
"Roosevelt's initiation of the Neutrality Patrol, which in fact also escorted British ships, as well as orders to U.S. Navy destroyers first to actively report U-boats, then "shoot on sight", meant American neutrality was honored more in the breach than observance."
American strategy was to enter the war but let the Germans make the declaration thereof:
"The Neutrality Patrols continued through 1941 but were rendered moot by Germany's declaration of war on the United States on 11 Dec 1941. As part of Germany's justification for declaring war, they made specific mention of the Greer, Kearny, and Reuben James incidents, describing them as flagrant violations of any supposed neutrality. "
"The Neutrality Patrols were controversial at the time and remain controversial still. Roosevelt's own Secretary of War, Henry Stimson, believed the patrols were belligerent acts and he advocated Roosevelt to openly say so."
If those who believe that AI is a great risk are sure that they are right, I think they should stop debating in these little niche spaces and instead throw themselves into changing public opinion. Scott and the rest do not sound to me like practical people, and I think they should hire people who are good at practical things without being unscrupulous. The group can work on scaring people about AI doom risk, but also about AI slop. I feel confident that there would be impressively awful results from a study like this: compare toddlers who spend several hrs/day watching AI slop (bright, loud, attention-getting stuff with very little order to it - and by order here I mean order that would make sense to a child aged 18-36 months: simple stories with toddler-level drama, people acting on motivations toddlers can grasp, stories that have continuity and a recognizable beginning, middle, and end) vs. toddlers watching kid vids from an earlier era that have the structural qualities I named above. Pretty sure the slop toddlers would not do as well on cognitive testing. That should get people's attention.
Society is obviously not going to stop AI progress regardless of what they say. The realistic outcome has always been to increase awareness of the topic for now and be ready to advise leaders when society inevitably GOES FULL PANIC MODE if AI finally does something actually dangerous. If it's too late to intervene at that point, well, too bad for humanity.
There have been multiple video podcasts with these various personalities debating. I think—this is an honest take—they need to button up their looks. They kind of look funny, and some of them speak funny. People don’t tend to take arguments from funny looking & sounding people seriously.
"Moore's Law is a human law. We double the computer power every two years. But once computers do it, they will first do it in two years, then in one, then in 6 months, then in 3 months, - it's singularity"
Jesus wept. These men want us to take their ideas seriously.
Yes, they need a good-looking and charismatic spokesman. Preferably someone that has no connection to AI research or rationalism. They can be fed talking points from people who are more knowledgeable on the subject.
>Seriously, every-other-day my phone gives me another story about Yud & Soares being wrong.
That's probably because they are wrong. You should be careful to avoid trapped priors in your thinking and use this as evidence to update away from AI being an existential risk.
Good luck with that one. I spilled thousands of words on the topic in these here comment boxes, but trying to explain the basic realities of making stuff to people who have never done it and have 0 interest in learning is draining and I am done.
You know the saying: "It's hard for a man to understand something when his salary depends on him not understanding it", right? But it is 100X harder for a man to understand something when his identity depends on not understanding it.
Yud is an intellectual idiot who hasn't done a day of real work and thinks the physical world is just like words on a screen. No, seriously, go read this gem: https://ifanyonebuildsit.com/6/wont-ais-be-limited-by-their-ability-to-design-and-run-experiments , and weep. The man, who knows nothing about running experiments, confidently explains how they can be "sped up" if only "smart people" think hard about it. Like, a 1000-hr HTOL test will somehow run in less than 1000 hours?
> But it is 100X harder for a man to understand something when his identity depends on not understanding it.
Yudkowsky's identity depended on the exact opposite of what you seem to think. He founded the Singularity Institute, which wanted superintelligence as fast as possible, and did a massive about-face in response to realizing the risks. In any case, he's mostly irrelevant (other than as a decent writer) given that his underlying position is shared by the top experts in the field (excluding those on Big Tech's payroll).
Name three of these top experts who are not computer scientists or "AI researchers", but have background in manufacturing or similar physical reality-based fields.
Look, I'm not saying computer scientists are "stupid" or anything of the sort. The point is, they deal with 1s and 0s, the bit-flipping field where progress has been relentless and accelerating at an exponential pace, and it's hard to see why it'd slow down. But they tend to handwave away the fundamental, irreducibly complex reality of the physical world - because they are typically not experts in it, don't have the experience of making things that move, cut metal, grow cultures, etc. Every time I read Yud or AI2027, whatever links folks throw at me, it's always the same - they gloss over the physical world at best, and are utterly clueless about it at worst (that would be the "Corolla made of atoms" Yudkowsky; I can't emphasize enough how ignorant the guy is about many subjects he bloviates about with astonishing confidence).
So when they say, "AI will keep getting better", I don't have a strong argument against that (but no AGI in 2027, come fucking on, those chips are in the fab today, what do you expect to happen between now and then?). It's when they say, "and it will kill everyone" that I ask a simple question: "how"? How will it kill everyone? Can you at least try to model this? We know what it takes to kill a human, come on, get some basic modeling done, anything beyond Disney fairytales (literally!), something that an engineer can work with.
Manufacturers have enormous incentive to want to push technological development wherever and whenever they can. I just don't understand why you would think these people would be so impartial. Why should we put these people on a pedestal above theorists? Without theorists, manufacturers would not have new products to build.
I was bringing up experts as a counterargument to things in the categories of "alignment will be easy", "AI will necessarily stop beneath human ability", and "it will be nice by default". Most people, being members of a species that absolutely dominates the entire planet by merit of intelligence alone, don't have an issue with the idea that intelligence -> victory, and don't need examples of "how could something smarter than me possibly outsmart me" to get it. (This isn't a counterargument to your point, just explaining why I commented like that.)
Oh, I see. I certainly don't believe "alignment will be easy" or anything like this. FWIW I think "alignment" as a separate thing from "development" is impossible - we will learn how to deal with the system as we are developing it, not by somehow being able to "jump ahead". I'm agnostic as to whether ASI is possible, but I'm pretty confident it won't be here any time soon.
Scott's piece is interesting but honestly adds nothing to the discussion. "AI will bribe people" is entirely possible, but we already have people who want to kill and wreck. I chuckled at the "plan worthy of Napoleon", as if his Russian campaign was something to emulate. Another example: the "its advice is always excellent – its political strategems always work out, its military planning is impeccable, and its product ideas turn North Korea into an unexpected economic powerhouse" bit is so naive as to make me question Scott's understanding of the world in general. I could go on.
Napoleon was a devastating battlefield commander who crushed army after army for 14+ years! His enemies were in AWE of his generalship, and he had a fairly good rationale for invading Russia. Hate to see this dismissive attitude of a man who helped build the foundation of liberty in Continental Europe.
He doesn't directly contradict any of Yudkowsky's points, but points to the complexities of the real world, which, as you point out, Yudkowsky sounds kinda ignorant of. Here's an abridged version of Lee's main point:
Yudkowsky and Soares believe that some systems are too complex for humans to fully understand or control, but superhuman AI won’t have the same limitations. They believe that AI systems will become so smart that they’ll be able to create and modify living organisms as easily as children rearrange Lego blocks. Once an AI system has this kind of predictive power, it could become trivial for it to defeat humanity in a conflict.
But I think the difference between grown and crafted systems is more fundamental. Some of the most important systems—including living organisms—are so complex that no one will ever be able to fully understand or control them. And this means that raw intelligence only gets you so far. At some point you need to perform real-world experiments to see if your predictions hold up. And that is a slow and error-prone process.
And not just in the domain of biology. Military conflicts, democratic elections, and cultural evolution are other domains that are beyond the predictive power—and hence the control—of even the smartest humans.
**********
When it comes to the points you made quite a while ago about the absurdity of the prediction we'd have functioning smart robots in 2027, you and practical knowledge absolutely win. You were right, and the writers of that AI 2027 thing look silly. If they didn't know much about factories and robotics they should have consulted someone who did. Lee, above, is making the same kind of point as you about the real world: it has characteristics that will slow AI down enormously and keep it from being an unstoppable force. But you, in your comments about robots in factories, had an actual factoid to rebut the claim with: if robots were going to be functioning in many places in 2027, there would be preliminary versions now in the places that were going to produce them, and the prelims aren't there. But you and Lee don't have an equivalent factoid to point to in the general case. Yeah, I get that living organisms and military conflicts and political changes are -- what are they called, chaotic systems? -- and so inherently terribly difficult to predict. But I don't see how anyone can say an extremely intelligent future AI could not predict them. Where's the proof? So it's hard to know whether the point Lee is making is a great common sense insight, or whether it just comes down to a statement that in the world as he knows it certain things cannot be predicted or controlled, and he cannot imagine that changing. So maybe his arguments merely demonstrate his failure of imagination.
So, Fibonacci, I'm not trying to be a clever debater here. This is what I really think. Wut u think of it?
OK, Fibonacci, I get it. Your points are valid. But I am still worried about AI doing us in. So first I’m gonna make sure you see that I get it, then tell you why I’m still worried.
*I get it.*
There are some things about how the universe works, such as the “butterfly principle,” that limit what an AI, no matter how smart, could do. Additionally, there are lots of things about how the practical world works that provide a lot of stability to the world as it is — or you could think of them as a sort of inertia that would interfere with a supersmart AI swooping in and shoving things in a bad direction of its choosing. You’ve pointed to various examples of this, and I ran across one of them myself recently: AI diagnosis of, say, pneumonia via reading of lung images is more accurate than radiologists’ in tests. But in real life AI diagnoses from images are 20% or so worse than radiologists’, because of various real-world lumps and bumps. The AI was trained and tested on high-quality images from one hospital, on patients who had no complicating conditions that might affect the X-rays. In real life, images vary across different hospitals, some are of mediocre quality, some are of patients with complicating conditions, etc. Also, you need a different AI for each of many body areas, and radiologists would have to pay for and have available dozens of different AIs to get through a day's work. AI doesn’t work on ultrasounds. And then there are complications having to do with whether insurance will pay for an AI-read image.
*Why I’m still worried*
1) I think Yudkowsky irritates you so much that you tune him out. I agree that he is dumb about the real world, but I don’t feel the personal irritation you do at him. I am used to weird smart people who are dumb about the practical world. And what I see is that lots of them are extremely smart about patterns, including patterns of ideas, patterns of observed regularities that suggest odd possibilities. It would not surprise me at all to learn that Einstein was as naive about the real, practical world as Yudkowsky. But he saw new patterns in physics behind the familiar ones. Some autistic people can recognize a 6-digit prime on sight. It’s a different kind of smarts from yours.
2) I know some of the examples in Yudkowsky’s book, such as the stuff about AI being able to just take over production of various things, can be refuted by pointing out the kinds of “inertia” real world things like factories and weather have. But those ideas are not Yudkowsky’s only reasons for believing AI is a grave danger. And I don’t think superintelligent AI would make the mistakes Yudkowsky did. If ASI was planning some kind of takeover that would give it more power than mankind, I think it would recognize the limits of its control of chaotic systems, and the ways practical constraints would slow down various moves. Don’t you? I mean, you and I easily recognize those things. In fact I bet that if I asked GPT 5 right now what would limit AI’s control of the weather, or AI’s taking over manufacturing, it would name a lot of the same things you did. I don’t know what an ASI would do, if it somehow had the goal of ruling the world via controlling resources and having unbreachable defenses against all forms of human attempts to override it. I’m just smart, not superintelligent. But it does seem possible it would think of things you, I and Yudkowsky have not thought of, things that would work in the world as it is.
3) Scott and Zvi take Yudkowsky seriously. I don’t know whether either of them knows how to change the oil in their car, but both have succeeded at a number of real-world things: training, employment, marriage, kids. You don’t succeed at that stuff unless you can take into account the real world, not some inner version of the world you’ve dreamed up and are impressed to death by.
4) All my intuitions and common sense tells me AI would be dangerous as hell if it had will, goals, etc. Right now, AI is a machine that just sits there inert until we ask it to do something. It can’t learn from experience, can’t learn from direct instruction, can’t “think over” what it knows and find isomorphisms, reconfigure things, have ideas. Seems to me nobody has any good ideas about how to make it able to. Personally, I think further development of AI is going to involve some sort of AI/human hybrid — either AI somehow trained on a developing human being, or AI that uses the brain of human beings bred and used only for this purpose, or human beings that are able to lead human lives, but somehow have continued and instant access to a very smart AI. And I *know* there is no way to guarantee a human being is aligned with the rest of humanity.
5) “The universe is not only stranger than we know, but stranger than we can know.“ What if ASI understands some strange things, things we cannot ever know. With this in the back of my mind, I ask myself about ASI and the limits on predicting chaotic systems: sensitive dependence on initial conditions, + if you measure them extensively you change them. Could an ASI that understands strange truths predict and control a chaotic system. And then I think “yes, by becoming the system.” I know that sentence doesn’t exactly mean anything. It’s not the moon, it’s my finger pointing at the moon. My way of contemplating the idea that there are degrees of intelligence that might permit insights that make impossible-seeming things possible.
Ere, thank you for a thoughtful response. Let's see if I can address your points within a reasonable word count.
1) I don't tune Y. out. I listen to him, and every time he opens his mouth/types words he proves to be an utterly ignorant buffoon. Look, when I was 20, I was at least as "smart" as I am now, but I started working as an entry-level engineer. Why? Because I knew nothing about engineering even though I was very smart. Well - Y. never EVER even started as an entry-level anything, he never learned anything about the subjects he bloviates about. Let me give you a perfect example, and if that doesn't illustrate how stupid the man is, I don't know what else to do:
He talks about ASI "taking our atoms" because it needs them (see https://www.youtube.com/live/6yQEA18C-XI ). First of all, atoms? Seriously? Atoms?!! Does he think that a mix of H2 and O2 is the same thing as H2O? That the ASI is so fucking dumb as to take everything in sight, without regard to what it is, and somehow take it apart down to the atomic level? Do you not see how insanely stupid this is? But let's keep going, the man just started showing his ignorance. When asked a perfectly good question, "why our atoms, we are not made of anything special, these atoms are abundant all around us", he ends up giving an example: ASI can burn you to release the energy.
Burn! Ere, a human is 70% water, we don't burn, it's impossible to burn a human without massive amounts of extra fuel, go ahead, buy a pork shank and try igniting it. How.... dumb does a man have to be to use this as an example of how ASI(!) will use humans.
But forget that: he thinks that an ASI won't be able to recognize the value of the complex low-entropy entity that a biological being is, and will just wreck it, wasting incredible amounts of energy (how do you think you get H and O out of H2O?) to destroy a self-replicating supercomputer and a super-robot in one package? This incredibly smart thing won't understand how valuable humans are to it? It won't be able to use them to achieve its goals? And won't instantly understand (remember, it's an ASI) how best to use humans, that they fulfill their potential best when they are happy?
The man is a truly special case of knowing much and understanding nothing.
2) Let me just say first that ASI doesn't exist, and it's not at all clear such a thing is even possible (we don't even know how to define it); it basically is another way of saying "a fantastic being that can do anything", i.e., God. Or Satan, we don't want to discriminate. But in any case, no, it can't control chaotic systems no matter how smart it is, and it can't shortcut computational irreducibility, any more than it can break the second law of thermodynamics (which is actually an expression of computational irreducibility, according to Wolfram).
3) Scott and Zvi are very smart, but, as we established in 1), that doesn't make them experts in anything. Scott, for one, has holes in his epistemology that a Mack truck can drive through, and yet he refuses to recognize them. See, for example, his debate with Skolnick about schizophrenia genetics, which was so bloody disappointing it forever reduced my opinion of him. But then he's a trained psychiatrist, so why would I overweigh his opinion on tech vs., for example, the guy who started three robotics companies: https://crazystupidtech.com/2025/09/29/irobot-founder-dont-believe-the-ai-robotics-hype/
4) Actually, I don't disagree with this one. AI is dangerous because it's a powerful tool, and like any powerful tool it can be used for good and evil. My objection is to the specific fantasy of AI getting up from its bed and murdering everyone.
5) No amount of "understanding" can predict the behavior of the real world. This is again the computational irreducibility of the universe: if it's impossible to predict what the Nth line of Rule 30 is going to look like without going through the full computation up to the Nth line (see the sketch after this comment), what hope is there to predict how, for example, a viscous fluid will behave? By the way, the simple-looking Navier-Stokes equations that describe it are generally unsolvable, ASI or not.
And again, "ASI becoming the system" is... look, please don't take it as an insult, it is a "profundity", a deep-sounding meaningless statement that reflect a religion-like view of ASI, basically equating it with God. Worrying about ASI destroying our world is at this point pretty much the same as worrying about God destroying our world, and should have about the same level of actionable response, i.e., do nothing different.
I’m not trying to have the last word, really just signing off with a few last comments — though feel free to come back at me if you want.
About various idiot things Yudkowsky has said, e.g. that AI will disassemble us for our atoms, or use us as torches. I agree they are dumb. I could have seen what was wrong with those statements when I was 16, based on high school science. That does not disqualify him in my mind as a judge of what to expect to happen as AI development proceeds, because it seems to me the situation is so profoundly novel that scientific and practical knowledge may not be that helpful. They certainly are helpful in thinking about how AI of the present and near-future era will affect manufacturing, health, the economy, etc. I’m talking about the bigger and more general question, which really has 2 parts: What will AI 30 years hence be capable of? And how likely is it that it would do our species great harm, either intentionally or unintentionally?
What 30 years hence AI will be like seems like a very difficult and challenging question, one that calls for deep and original thought about how the human mind works, how computers and machines work, what kind of modifications are possible to the type of AI we now have, etc. It might call for a new paradigm. I’m thinking here about the 2 smartest things I’ve ever understood — Wittgenstein on language and mind, and relativity. I doubt that either Einstein or Wittgenstein was a bit practical, or gave a shit about practical knowledge. They had minds that called up huge and novel patterns and played around with them. Some of what they would have had to say during that process would have sounded naive and dumb to most people: “What if space and time are sort of like the warp and woof of one piece of fabric?” So when I worry that Yudkowsky is right, I am not thinking at all that he is better than you at smart practical science-based thinking — I’m wondering whether he is doing genius pattern matching. He is obviously autistic. Maybe he’s doing the equivalent of recognizing 6-digit primes.
I agree that “super-intelligent AI” is used as though it means God, and actually it doesn’t mean anything. But I think the idea that AI 30 years from now could do astounding things, things that depend on paradigm changes, is not absurd (though also not guaranteed). That’s why I refer to it above as future AI, not ASI.
And there was one time when you skated right over something I’d said:
I said: “I ask myself about ASI and the limits on predicting chaotic systems: sensitive dependence on initial conditions, + if you measure them extensively you change them. Could an ASI that understands strange truths predict and control a chaotic system. And then I think “yes, by becoming the system.” I know that sentence doesn’t exactly mean anything. It’s not the moon, it’s my finger pointing at the moon. My way of contemplating the idea that there are degrees of intelligence that might permit insights that make impossible-seeming things possible.”
Your response: ”And again, "ASI becoming the system" is... look, please don't take it as an insult, it is a "profundity", a deep-sounding meaningless statement that reflects a religion-like view of ASI, basically equating it with God.”
I *know* that. I *said* that, I literally said “I know this sentence doesn’t exactly mean anything.” What I was trying to point at was the idea of paradigm shifts in science and physics, and the idea that future AI might be capable of some shifts that made possible things that now appear utterly impossible. If I were to take it a little further, I would say that maybe chaotic systems are like space or time, and if you change the paradigm so that space and time are warp and woof of a single fabric, new possibilities open up. NO, of course I am not sure of that. NO, I do not have a new paradigm in mind. NO, I am not convinced that future AI would be capable of coming up with a profound paradigm shift. I’m just reminding you that such things happen, and that it makes sense to consider that a future artificial intelligence would be capable of such a thing.
PS I appreciated your nail gun comment to that Wormwood creep.
1. Lee still(!) understates the unpredictability of the world. He talks about complex systems, but the thing is, even simple systems (example: https://en.wikipedia.org/wiki/Rule_30) can display unpredictable behavior, meaning that the only way to know what the system's state will be in the future is to go through its operation; there are no shortcuts. This is what Wolfram calls "computational irreducibility", the impossibility of "jumping ahead". For example, think about why a 3 nm process node was not available 20 years ago. Why did we go through 350, 250, 180, 130, ... 22, 14, 10 nm first? We knew all the advantages of going to 3 nm, but we couldn't get there without going through the motions of shrinking the transistor one step at a time. Leaping ahead was impossible even though we knew exactly what would be needed to do it.
2. Specifically on "chaotic" systems such as weather: the reason no amount of "intelligence" or "compute" can make predictions beyond a certain point is that the state of the system at a faraway point in time depends on the initial conditions to such a degree that even the tiniest fluctuation in them results in a massive change at that point in time (the "butterfly effect"). This means it doesn't matter how advanced the algorithms are or how smart the AI is; its predictive ability is gated not by intelligence but by the accuracy of the starting data (see the toy sketch after this comment).
And those data come from real-world sensors, which are expensive and slow to make and need to be installed. And then things get worse once active agents impact the systems: all predictability goes out the window.
These fundamental limitations appear to be utterly ignored by Yud et al., as if more lines of better code could spin weather sensors out of the ether and place them at every square foot of the planet.
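[Editorial note: here is the toy sketch mentioned above, again an illustration added to the thread rather than part of the comment. The logistic map at r = 4 is a textbook chaotic system: two starting points that agree to ten decimal places become completely uncorrelated within about fifty steps, which is the commenter's point that forecasting is gated by the accuracy of the starting data, not by the cleverness of the forecaster.]

# Logistic map x -> r*x*(1-x); fully chaotic at r = 4.0.
r = 4.0
a, b = 0.3, 0.3 + 1e-10             # initial conditions differ by one part in 10^10
for step in range(1, 61):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.2e}")

The gap between the two trajectories roughly doubles each step, so even a million-fold improvement in measurement accuracy buys only about twenty more steps of useful prediction.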
> when his identity depends on not understanding it
"AI Doomer" is not Yud's identity mor any of these people's identities. They have careers and lives beyond this. And many of them would happily just go back to "boring old AI development" if given the chance.
> Yud is an intellectual idiot who haven't done a day of real work
He worked at Bell Labs. Or do you mean Real Work as in "fixing a toilet"?
> The man who knows nothing about running experiments confidently explains how they can be "sped up" if only "smart people" think hard about it.
These types of arguments remind me of the "God of the Gaps" arguments in the debates between Christians and Atheists. "Science can't explain this, science can't explain that." But then it does. So the Defenders Of The Faith retreat to another unexplained "gap" in our understanding of the cosmos.
Well, replace "explain this" with "speed this up" and we basically have the same argument. "A, B, C, D, X, Y, Z can never be sped up!" (until they are)
Protein folding was an impossibly complex problem, until AI researchers solved it, and then found more protein shapes in a few years than in the past half century. By a factor of a thousand.
I really wish people like you would come over to our side of the fence and notice the Gigantic Grizzly Bear rapidly bearing down on us.
No he didn't, lol lol lol. His FATHER worked at Bell Labs.
Yudkowsky hasn't worked a day in his life. Never mind that, he hasn't even formally studied anything, you know, in a setting where one has deadlines and standards and a professor can tell you that your work sucked and you have 48 hrs. to resubmit.
That guy. That guy wants to bomb datacenters that he wouldn't know how to turn on if his life depended on it.
The man is a published author. Writing is work, and judging by the audience he's amassed (and my own opinion, although you may disagree - still, no author is everyone's favorite) it's work he's quite good at.
Yes, he has a gift of gab. He’s good with words. But that doesn’t make him an expert in anything else. For example, “Corolla made of atoms” is a grammatically correct snippet that is also meaningless drivel as far as modeling reality is concerned.
He’s a Greta Thunberg of AI, understanding little and pontificating much, turning the field into a clown show. She had her “how dare you”, he has his “everyone will die”, with about the same convincing power, i.e., asymptotically zero.
Notice how all these impossibly complex problems are related to "knowledge": we didn't know how to solve protein folding, now we do. We didn't know the structure of human DNA, now we do. The LLM didn't know how to write MATLAB code, now it does.
Now what?
What are we doing with this knowledge? Where are the amazing genome-matched therapies we were all promised?
Well, they are probably still coming, but using the knowledge to create real things takes time and effort that cannot be compressed beyond very strong limits no matter how smart the people/machines become. Some experiments can be parallelized - at massive cost, mind you - but they cannot be shortened; see my HTOL 1000-hr test as an example. And the whole point is that the outcomes of the experiments cannot be predicted, which is why we run them.
FFS, we can't even predict the performance of a basic IC without models derived from experimental data!
We advance real world progress one experiment at a time. The physical reality is fundamentally un-modelable from any sort of "first principles", it's irreducibly complex and we can only create occasional short bursts of computational shortcuts, not infinitely self-improving loops.
Your comment implied that there were no good examples in the linked document. The poster is making the bare assertion that no, the human genome project is a good example.
Reading your post, you say "knows nothing about running experiments", so I presume that's an example of a well run experiment that was under budget and data efficient.
Oh okay, I didn't realize that's an idea that AI-educated people took seriously. I assumed that what people are worried about is AI+the internet confusing reality for most people to the point where it's a huge problem, or something along those lines. I feel like that idea is pretty realistic, but then again, that's just a feeling based on my minimal use and knowledge of AI.
"AI+the internet confusing reality for most people"
Apparently, if I believe media articles, it's happening already. People are using chatbots as their romantic partners, therapists, and marriage guidance, and since AI is that oleaginous, agreeable reinforcement dialogue partner (you are so smart! you are right! anyone who disagrees is in the wrong!) this is having devastating effects.
Granted, if you're asking AI for advice, your relationship is already rocky, but the AI is being used as a bludgeon with "see, I'm right, the AI agrees with me and we all know AI is always right".
"Even Geoffrey Hinton, a Nobel Prize-winning computer scientist known as a “Godfather of AI” — a technology that likely wouldn’t exist in its current form without his contributions — recently conceded that his girlfriend had broken up with him using ChatGPT.
“She got ChatGPT to tell me what a rat I was… she got the chatbot to explain how awful my behavior was and gave it to me,” Hinton told The Financial Times. “I didn’t think I had been a rat, so it didn’t make me feel too bad.”
This is just reinforcing my pessimism about how AI will destroy us - not because it becomes super-intelligent and self-aware, but because we humans are stupid and we will happily hand over decision-making ability to the machine because it relieves us of the work of having to think for ourselves.
I'll be somewhat more than my usual pedantic and say your pessimism here might be misplaced. For one thing, Hinton apparently never believed what ChatGPT said. For another, the narrative in progress here is that Hinton's girlfriend made a mistake in trusting ChatGPT, and that mistake was so big that it might justify them breaking up anyway.
If that's the case, then AI was a benefit here. Who knows how many terrible-but-hidden matchups it could spot and prevent? Imagine all the broken homes and traumatized children that never come to be!
Thirdly, Hinton is 77; assuming his ex-GF was about as old, then frankly, I'm not sure how much was going to result from that relationship either way. Admittedly, that takes some wind out of the sails of my second point, but if you're concerned about younger people making the same mistake, then the wind goes right back in.
Tons of experts have voiced concerns about AI killing everybody. The usual names are Geoffrey Hinton, Yoshua Bengio, Demis Hassabis.
If you must decide using the advocates' prestige instead of evaluating the arguments yourself, I'd still go with them instead of anonymous substack commenter who "knows the realities of making things".
Wake me up when, for example, a semiconductor reliability engineer publishes a doom scenario showing how AI can run a fully automated fab recursively improving yields and throughput.
"Prestige" is not working for us though, because it's dwarfed by the Institutional Prestige of magazines and media outlets that are against us (even if the particular articles are written by Joe from Sales).
The "Prestigious Names", imo, need to Get Out There And Verbally Debate.
This war of strongly-worded blogposts is unwinnable.
Can anyone recommend a good news site that outlines which news sources are reporting an event, separated by a "right - center - left" categorization? I already know about Ground news, any others?
I only know about Ground News, and only from their marketing. Have you tried it and were you disappointed? If so, why? I am conceptually interested in the approach but have never tried any such site myself.
I have tried it, and while I'm not disappointed per se (it does what they claim it will do), it doesn't do everything I might want. Ground News seems configured to present its users with a specific event or news story, and then break down who is reporting it. It doesn't really have a time-efficient way of laying out what the differences are between right, center, and left reporting on these stories, so I am interested in what other sites might be offering, and if there is anything even better.
Going to try to quit Substack to finally read that book of feminist theory. When I'm back I'll be trans, a redpill misogynist, or dead by my own hand. Wish me luck!
From experience across several decades of tech jobs, and if you are anything like me, at the end of the first day (or first few days), you will absolutely hate the place -- the work is incomprehensible and there's no useful guidance anywhere, the people are standoffish at best, there has been *no* consideration given for supporting you in your work (like actually arranging for you to have a computer, chair, desk, phone, or whatever -- let alone training or any anything else "touchy-feely" like that).
That's slightly exaggerated, but only slightly -- each of those things (individually) has actually happened to me personally. (Yep, even starting a programming job: me "what computer can I use", them: "er...")
Thing is: many of those jobs I liked or loved after a while, while a few turned out to be bad mistakes for me.
What I'm trying to say is that feeling early on that you should never have taken the job, or that you'll never be suited to it, is (at least in my experience) normal, and to be expected. It provides no information (doesn't shift the Bayesian credences non-trivially) about your future in the job. Above all *GO BACK NEXT DAY*. The marginal cost of sticking it out for a few more hours/days is very low, the expected payoff is very high, but (if you're like me) it won't feel like that.
(Of course, none of that applies in cases of *actual abuse* -- but that's never happened to, or near, me so I can't comment further.)
For some reason the above has turned up as a top level post despite me entering it the "reply" box to "Ebrima Lelisa". Sorry about that. I shall not attempt to use Substack again...
This is a common bug that happens when commenting using the app, it's probably not your fault. Commenting on an actual laptop almost never messes up in this way.
1) Is it conceivable that consciousness is an evolutionarily contingent outcome?
That is, given a sufficiently complex system, organic or inorganic, capable of displaying intelligent behavior, learning, playing chess, composing poems, the whole hog, but still entirely unconscious, this is in fact the norm and what might be expected, and that we are conscious, is actually an evolutionary accident, unlikely to ever repeat?
Particularly, if consciousness itself is a language construct, as Jaynes suggests.
2) Is it possible that simulation can not capture everything that is in the thing or system we are trying to simulate?
For physics only captures metrical properties of things, and this leaves a possibility of non-metrical aspects of things, first of which is the existence of the thing itself.
Now, simulation, even in principle, can only simulate what has been put into numbers. That is, it only covers metrical aspects, with non-metrical aspects outside the realm of simulation.
Now, consciousness is possibly related to non-metrical aspects of conscious matter. If so, simulating matter, howsoever faithfully, even atom by atom, will not capture consciousness.
For consciousness is inherent in the conscious matter such that no simulation can yield consciousness.
Some behaviours just don't make sense without consciousness, so there is absolutely no chance humans are the only conscious beings.
However, consciousness might still be an evolutionary accident. Some people hypothesise that it's almost like a parasite that is actually worse for survival long-term, because the environment cannot properly adapt to conscious decisions.
1) Conceivable, but extremely unlikely, given what we know about how the brain works. Conscious deliberation appears to serve a critical role in learning from phenomenal experience in the moment. It's the function in the mind that assigns an emotional tag to physiological reactions, and that in turn organizes the long term memory. It is the gatekeeper to the self-concept, also stored in memory, which guides future behavior. Seemingly, a non-conscious entity would not be able to carry out those functions.
2) If the simulation is good enough, it should simulate everything, given that everything in the universe is governed by material causes. I suppose you could argue that a simulation that produces the actual thing isn't a simulation anymore, rather the thing itself, but that seems like semantics.
But the point is that cognitive processes seem not to lie in the physical structure of the neurons themselves, but in the patterns of information exchange between them. Given that, anything which reproduces those patterns should reproduce their outcomes.
Even if we know how the brain works, this is a far cry from knowing the how and why of consciousness--which, as you must know, is called the hard problem.
Your point about simulation is what I am challenging. Please note how one models: we leave out something, and even the most exact model necessarily leaves out something. A simulated hurricane is not a hurricane. A simulation of neurons is not, and cannot be, the neurons themselves. It must, by the very definition of a model, leave out something.
All physics models do this. Maxwell's equations leave out the actual experience of electricity and the actual physical objects, which are replaced by numbers.
You wrote: "That is, given a sufficiently complex system, organic or inorganic, capable of displaying intelligent behavior, learning, playing chess, composing poems, the whole hog, but still entirely unconscious, this is in fact the norm and what might be expected, and that we are conscious, is actually an evolutionary accident, unlikely to ever repeat?"
I don't have to solve the hard problem of consciousness (that is, I do not have to demonstrate how it develops) in order to outline what functions it serves. Since it appears to serve critical functions within the mind, I propose that no organic system can evolve human like intelligence without an actual consciousness. AI does not disprove this assertion, because a human like intelligence designed it. I'm making a point about evolution in nature. For that to happen, those functions would have to be served by some other cognitive process. Which one?
As for simulation, you were asking if it might be impossible to simulate a system under study. For practical reasons, yes, it may be impossible to reproduce a hurricane perfectly on contemporary computers. But I thought you were asking conceptually, is it in principle impossible to accurately simulate a system. In theory, no, it shouldn't be. Provided you have precise data regarding the relationships between every element in the system, and you have enough computational power to run those relationships over time, there is no theoretical barrier to reproducing that system perfectly. Whether we could ever, in real life, do that for something as complex as the human mind is a different issue.
Now we are getting into semantics, as I warned. When scientists create a model of a real world phenomenon, like a hurricane, what they are attempting to do is test whether or not the real world phenomenon follows a mathematical model, and how closely. These models are based on various theories about how the real world phenomenon works - in the case of a hurricane, how air molecules exchange energy over time in a space. If the model is based on these theoretical ideas, and the model behaves in a very similar fashion to the real world phenomenon, then this is taken as evidence that the theories are close to being an accurate description of what is happening in the phenomenon (the hurricane). Models like this are never 100% predictive, because we have only incomplete data regarding the actual behavior of air molecules in hurricanes.

But what if we had 100% precise data of every quantum of energy exchanged by every molecule? Then, in theory, the mathematical model, and the hurricane, would behave precisely the same. It is probable that we can never do this, because we can never get that precise with our data, and our computers do not possess the computational capacity to run the model if we did, but there is no a priori reason why a simulation cannot come arbitrarily close to the real thing.
This is why artificial general intelligence is a concern. In theory, there is no reason why a more precise model of the human mind than we currently have could not reproduce various cognitive processes going on inside it, including conscious self-awareness. I do not believe that we are anywhere near producing that outcome, but I can't think of any reason why it would be impossible.
1) It is conceivable, but how would you falsify this?
2) It is true that science by definition can only capture what we can measure (metrical), and by extension, physics as a branch of science can only capture what we can measure.
I'm not sure the same is true for simulation - e.g. simulations could capture non-metrical properties by accident.
I think you are confusing the map with the territory. A simulation could, in principle, be conscious. If you are correct that consciousness is not measurable, then it follows that we have no way of knowing that our simulation is conscious (at least by science). It does not follow that the simulation does not cause consciousness by some unknown mechanism.
Well, one cannot rule out that a simulation may capture non-metrical properties by accident. So it cannot be ruled out that a simulated human could be conscious, but nothing guarantees it.
That's all I am saying. You can have a perfectly simulated human in silicon, behaving pretty much like an ordinary human, at least for a short duration, and who would be perfectly non-conscious.
>Well, one cannot rule out that a simulation may capture non-metrical properties by accident. So it cannot be ruled out that a simulated human could be conscious, but nothing guarantees it.
Agreed, but who is arguing that it is guaranteed? I think we can't know one way or the other - though if you have a simulation that gives output exactly like a human, it seems conceivable that it might be conscious. After all, the only evidence I have that other human beings are conscious is that they are similar to me. You could well all be p-zombies!
Also, if you believe in a mechanical world governed by some underlying rules, there must be some replication of a human brain that would produce consciousness.
>This is what I would expect.
Why would you expect that? I think we have no idea how consciousness is produced, or really even a good grasp of what consciousness is!
It's also possible that gravity is the suction of the eternally hungry earth's core, craving to pull us into its mouth. But lots of other things about how the universe works strongly support a different model of whattup with gravity.
Generally I'm very suspicious of non-physical explanations for physical phenomena. If something non-physical caused consciousness, how would that thing "interface" with the physical world of brains and neurons? Maybe we simply don't know enough yet, or aren't smart enough, to have a useful model of all brain functions that is amenable to simulation. That doesn't mean we will never have one, and it certainly doesn't mean that we *can* never have one.
"Watts himself dismisses the idea that humans have free will as a "farce" unworthy of serious debate. "I don't have much to say about it because the arguments seem so clear-cut as to be almost uninteresting. Neurons do not fire themselves. [...] The switch cannot flip itself. QED." "
He can not be sympathetic to the view sketched here.
If by free will you mean that your brain takes input, processes it, and then makes decisions that have implications in the real world, then I think humans have free will.
I have come to think there is no real paradox between this and a mechanical world view, it's just at a different level of abstraction. The neurons may fire in an entirely mechanical way, completely determined by physics, and the effect of this is thoughts arrive in the brain, and decisions are made, i.e. free will.
The alternative is that there is something outside the physical world making the decisions. That seems to lead to all sorts of problems. Probably that is the (naive) version of free will Watts is dismissing.
Different definitions of free will require freedom from different things. Much of the debate centres on Libertarian free will, which requires freedom from complete causal determinism (and therefore, freedom from inevitability)
(Watts seems to be requiring fully uncaused causes -- neurons that fire for no reason at all -- rather than not-fully-deterministic causes).
Compatibilist definitions of free will only require freedom from compulsion, and allow free will to exist in a deterministic universe. Sam Harris believes free will is a form of conscious control.
Libertarian free will has sub-varieties.
One is "contra causal" free will, which requires freedom from physics, on the assumption that physics is deterministic. This is often connected with the idea of a supernatural soul, that is able to override the physics of the brain. In contrast, naturalistic libertarians seek to find free will within physics, by rejecting physical determinism; they regard indeterminism as a necessary (but perhaps not sufficient) condition of free will.
Thanks, I really should get more into the literature on this.
I think what I'm saying is I agree with the compatibilist definition, and reject both the versions of libertarian free will.
I think the more interesting rejection is the naturalistic version (assuming I understand it correctly). What does it really mean to have a choice in this sense, within physics? First, I think it requires that there is somehow a counterfactual: you could turn back time, or there is an alternative universe where a different choice is made (or at least this is imaginable). I think that implies that given the same input and the same exact state of the brain (including the sum of all knowledge up to that point) your brain could decide differently. But this only seems to imply that there is some random component to your decision process. If that is the case, I think it is not really what most people think of when they say free will. So really, naturalistic free will looks like an illusion, or no more free will than the compatibilist version. So my conclusion is that the naturalistic definition of free will really requires accepting the compatibilist view.
Naturalistic libertarians talk about torn decisions, where you have fairly strong reasons to do more than one thing. An undetermined choice between two things you have reasons to do cannot leave you doing something for no reason.
This point is explained by the parable of the cake.
If I am offered a slice of cake, I might want to take it so as not to refuse my hostess, but also to refuse it so as to stick to my diet. Whichever action I chose would have been supported by a reason. Reasons and actions can be chosen in pairs: in the case of the cake, (diet, refuse) and (politeness, eat).
"If by free will you mean that your brain takes input, processes it, and then makes decisions that have implications in the real world, then I think humans have free will."
That is not what free will means. That's just will alone. Free will means that some component of the will is free from the chain of cause and effect. It originates behavioral sequences, or, put another way, it is an original source of a causal chain.
However, others have defined free will as the source of cognitive outputs that cannot, even in theory, be predicted by any means available to us, and yet manifests a structure such that it isn't random either. Nonlinear systems are one such example. If consciousness is a nonlinear system then it might appear free from our point of view.
I think that depends on your definition of free will. Though I disagree that your definition of free will is a good one, considering what is normally implied by free will.
As I argue in other comments, by your definition of free will there is no free will. But people often imply this has consequences.
For example, people will often argue that if there is no free will, then someone is not responsible for their actions. I believe this is a false conclusion given your definition of free will. Choices that have real-world consequences are very much made in your mind, and it makes perfect sense to hold you responsible for your actions even if things could not have been different. After all, I think saying that things could have been different is either a rejection of entropy or the flow of time, or an endorsement of some form of parallel-worlds theory, which requires that there is some random component. It seems that we can't turn back time - which implies that everything that happened (in this world, if you believe in parallel worlds) must have happened exactly like it did in fact happen.
It's the definition historically used in philosophical debates about the issue. What the implications are, whether people are "responsible" (whatever you think that means) for their actions or not, or whether free will is possible under the causal assumptions of positivism, are not germane to the definition itself. Besides, it's not the overall concept of free will that I am disputing, it's the use of the word "free". No one disputes the existence of human will, but whether that will is free. Free of what? External causal forces. If the brain is not to some degree free from external forces, then its behavior isn't "free."
I have a feeling that if I pulled a bag over his head, locked him up in a cellar, tied him to a chair, and started torturing him he'd believe fast enough I had free will and that replies of "Sorry, Pete, I can't help myself, I can't choose to behave otherwise, this is all the complex interplay of atoms bouncing around mechanically in the meat standing before you" wouldn't cut any ice when he was begging me to stop.
Now, on a grand theory scale of "since the creation of the universe this was all foreordained due to the inexorable laws of nature", fine, sure, no free will.
On the level of "You are sticking a gun in my face and telling me hand over all my financial information or you'll blow my brains out", we certainly believe in free will, else why go to court over this? We don't prosecute rocks for falling downhill in obedience to the inexorable laws of nature and breaking our car windows.
Prosecuting rocks wouldn't make future rock falls less likely. We do, however, preemptively imprison (fence/concrete) or execute (blast) rocks that might fall to avoid that. People are predictive systems, so prosecution and its aftereffects presumably do help avoid future crime, as does revenge, in many cases.
You ask the person to stop because that might cause them to stop. There is no free will involved. Screaming for mercy is simply another external input having a deterministic effect on their behavior (the victim might not be able to predict that effect, but that's not germane to the argument). If the torturer fails to stop, that isn't free will, that's just other external inputs (the mutation that turned them into a psychopath) having greater weight when the behavior is produced.
As for what causes us to believe in free will, that's another issue altogether.
I dunno, Deiseach, if he was really brave and snotty he might creep you out pretty good by observing that he knows you couldn't choose to do otherwise than to say "Sorry, Pete, I can't help myself, I can't choose to behave otherwise, this is all the complex interplay of atoms bouncing around mechanically in the meat standing before you."
Per Aquinas, stones move by necessity, sheep by instinct and people move freely.
I never fully understood this. Does it mean that there is not a dichotomy between people/non-people but rather a trichotomy of non-living, living, and people?
>On the level of "You are sticking a gun in my face and telling me hand over all my financial information or you'll blow my brains out", we certainly believe in free will, else why go to court over this?
One could reframe this as "we certainly believe that we can't predict the future for certain." A somewhat nebulous, untestable concept like "free will" need not enter in any fashion.
Not sure I agree. I think there is a potential phenomenon to be explained (a mind that causes its own behavior) and a metaphysical explanation (souls or essence or what have you). But a brain that has the capacity to originate its own behavior, apart from the material chain of cause and effect in the universe, isn't a metaphysical concept, it's a scientific one (albeit one that might not exist).
I'm starting a new tech job and I'm terrified. This is my first full time gig. I graduated and looked for months before finding this job. I'm terrified that I'm going to mess it up.
Adding to my other reply (which Substack has unfortunately buried at top-level):
Take notes. In real-time, not afterwards. The notes themselves are likely to help more than you'd expect (unless you are habitual note-taker anyway); being *seen* to make notes will give a good impression in many ways.
Second this, but both real-time and afterwards. At every job I had, I kept a text file filled with "how to do xxx", and it was super useful. Also, if you're committing bug fixes, log them all in a similar file, also can be very useful.
As new to the company AND a new grad, you will be in a wonderful honeymoon period (~6 months) where literally no question you ask regardless of how basic or inane will be counted against you. Stuck on something for more than an hour? Ask. Want sparkling soda instead of those gross energy drinks? Etc. Btw, after 6 months you will be expected to know at least some stuff so get those stupid questions in early.
You are not going to know everything the first day, and nobody will expect you to do so. Ask for help, people will understand you are new and have no idea as yet 'how things are done here'. Be as helpful as you can be, try and keep out of office politics, and good luck!
I'm starting a new tech job next week and I'm terrified too, even though I've been doing this for 20 years. Be a person, do good work, and don't be afraid to take criticism. Also, the fact that you're here puts you at an advantage: you can ask us things like "is this normal" or "what am I doing wrong here" and you'll get a bunch of answers.
I was in the same boat pretty recently, and have (so far) managed to make good on it. A word of caution: every job, company, and person is different, so there's no guarantee that all of this will apply to you. Some ways of thinking that might be helpful:
1. Most professional jobs expect a modest acclimatization period in which you're still learning your way around the job and the business. Don't stress out if you can't do everything at once, and don't be afraid to go to your supervisor or coworkers to ask questions or request help.
2. Make sure to be respectful of whoever your immediate supervisor is. I don't mean "respectful" in the sense of bowing and scraping, I mean it in the sense of giving due consideration to how the things you do impact them. Things like being upfront with them about problems or delays that could reflect on them, keeping them in the loop if you have important discussions/collaborations that they're not party to, and not complaining about them to others in the org (even if they deserve it). If they're a good boss, this should just be basic decency. If they're not, think of it as a set of survival skills: whatever their faults, you'll still be better off if you don't make them mad at you.
3. Beyond 1 and 2, just generally try to do good work. In a good, well-functioning organization, good work will be noticed and rewarded. In a more dysfunctional organization it may not be rewarded directly, but developing the habits and skills of good work will make it easier to move to a better job, especially if you can produce anything tangible (like code in a portfolio) that demonstrates it.
4. Keep an eye on the exit, even if you'd prefer not to have to use it (this feels weird to write because I really like my current job and employer, and hope to stay there for many years). Every job is a business arrangement and you should generally expect that your employer will be perfectly willing to end it if that's what's best for them. So you should keep some of the same attitude. My understanding is that working for even a few months at a tech job will make you considerably more employable elsewhere, and you will likely be improving your skills quite a bit at first. Even if you don't intend to apply to anything, try to keep an up-to-date resume/CV and take at least occasional glances at relevant job postings[1] to have a sense of what your options are and how difficult it would be to pull up stakes if you need to. Also, psychologically, keeping in mind that you *can* leave can help take some of the edge off stressful situations, and it's easier to do good work if you aren't stressed.
[1] But *don't* do it or mention it at work. Lots of people are reasonable about it, but some employers will take it the wrong way.
What a beautiful milestone. A few things to keep in mind:
1. Nobody there knows everything. Not even the CEO. All companies are collections of incomplete knowledge and inefficient systems.
2. Always do two things:
(a) your core job: what’s on your job description or your contract. You will be asked to do a multitude of things outside of that, and you’ll do them (because you want to be a good colleague, and because you want to learn, and because you don’t yet know what you’re going to be good at) but don’t let all those other tasks distract you from getting your core job done.
(b) pay attention to where you can add value over and above your core job. You’ll get some credit for doing exactly what you were hired to do, but you’ll get more for seeing things you weren’t hired to do and doing them too.
3. Yes, it’s scary, because you’re going to get paid next month and the month after that and the month after that, and you’ll be afraid that you’re not earning that money, but know that every day you’re learning things (even things that seem tiny and insignificant) that are valuable to your current employer – and especially valuable to another employer further down the road.
4. Jobs, gigs, projects, opportunities come and go all the time. You don’t know where the next one is going to come from, but I promise you it’s going to come.
5. You will face situations that you hate, that you aren’t sure you can manage, and you will wonder how you’re going to get past them. And then they’ll be gone, and something else will replace them, and everyone (including you) will forget about the thing that seemed insurmountable.
Just remember that the vast majority of people are not actually as good at what they do as they originally seem. Take pride in fully comprehending your responsibilities, take ownership of them, *just do things* instead of timidly wondering “what if?”, and you’ll be better than 75% of workers. Once you realize you are fully capable of doing nearly anything at all with a good ol’ college try you’ll start doing even more, and then you’ll be better than 95% of workers.
I'm in a similar boat; today was my first day, actually. I think the biggest idea to keep in mind is the fact that you were hired by your employer. They know that you're new to the game, they've factored your inexperience into their calculations, and they still decided it's worth it to hire you, because you'll learn and grow as they show you what your job consists of and what they want from you. Figure out the actual boundaries you have at work with your supervisor and your team, find out what your best resources for learning are (people or otherwise), do your best with what they give you, try not to delete the repo and all the backups (or make other mistakes of that magnitude), and you should be fine.
I signed up for Sleep Sheep the day Scott posted the link in an open thread. I was pretty desperate, feeling dysphoria from a number of particularly terrible nights' sleep. What follows is my review, as best I can craft it with a brain that is rationing sleep opportunity time.
The best thing about Sleep Sheep is that I’m doing it. I’d known about CBT-I for some time. I’d known it was the gold standard, that it involved rationing sleep, and I’d vaguely even planned August-October as the time to do it since my wife is on a series of vacations, and it’s easier for me to tinker with lifestyle choices while she’s away. I’d nonetheless remained fairly horrified at the prospect of rationing sleep given how horrible sleep deprivation has been to my life, and I’d been reluctant to pull the trigger.
Turns out horror like mine is common, and it feeds into the insomnia cycle. Awoken at night for whatever reason and impatient to fall asleep, I fixated on untold horrors that might happen the next day due to my inability to fall asleep. These foretellings compounded the problem. I'd noticed this connection, and begun to notice certain days turning out pretty ok, even excellent sometimes. I'd even begun debugging my emotional response, but this process was iterative and piecemeal, one enterprise among many in a life without a coherent plan, often set aside when other demands became urgent.
What I appreciated about Sleep Sheep was that it provided a simple, socially grounded structure to keep me persistently working towards consistent, quality sleep: log my sleep, follow the advice of an AI sheep, and meet once a week with my sleep coach. I stated at the beginning I would make a good faith effort to work with the program, and I would have felt embarrassed going into a weekly meeting with Luomei having shirked my objectively not-that-hard responsibilities. Meanwhile, the prospect of asking for a refund if the program failed because I hadn't done the bare minimum felt horrifying. Squeezed between twin shame-driven imperatives, I charged forward into the sleep-deprived abyss.
And the abyss was… not that bad? Day by day, I muddled through, generating disconfirming evidence against the catastrophic foretellings disrupting my sleep in the middle of the night. My sleep wasn't great, but it offered predictability. I could mostly plan a life around it. Going through the first couple weeks was difficult, but there was a frontier of incremental progress. In fitness, meditation, and clinical practice I have grown to love the philosophy of progressive overload, so the notion of a brighter future built on tolerance of present suffering slotted into a well-developed philosophical infrastructure.
There is an archetype to someone who does not need the structure and support Sleep Sheep provides. A friend from practicum noticed a few months back she was having difficulty sleeping, implemented a CBT-I protocol, and moved on to living her best life. Said friend also had a successful career as a military dentist, recovered from a horrible muscular injury, and balanced a mildly rigorous MFT program alongside fulltime work. She makes frequent use of spreadsheets.
I enjoy thinking of myself as made from that mold, but past experience demonstrates I am not, and I prefer the consequences of realistic planning. So I pay $300 a month, Luomei gives inspiring pep talks once a week, and an AI sheep gives me psychoeducation as we structure my CBT-I program. Sometimes I backburner sleep improvement, and I don't worry about it because Sleep Sheep will get me back on track in, at most, a week's time. I am experiencing incremental improvements in a life domain that has strained my marriage and rootbound my career.
I think many people are more like me than my high conscientiousness friend. We prefer seeing ourselves as able to do the simple thing, yet, empirically, we don’t. It feels stupid to pay an exorbitant sum to a specialist to hold our hands, so we starve valuable self improvement avenues of investment—not because they lack expected value but because the pathway to that value is aesthetically unappealing. Paid help can bypass anticipatory anhedonia, allowing better lives of more frequent exercise, emotional regulation, and, in my case, better sleep. But paying for “simple” feels weird and shameful, so we don’t. For people like that who have difficulty sleeping, I recommend Sleep Sheep.
Dude, I so wanted to loop in the general argument for using paid services to overcome the anticipatory anhedonia that impedes progressive overload across many life domains, but, alas, I couldn't fit it in.
Trump's tactic of suing companies for damages, and then settling for a few tens of millions of dollars – YouTube just settled for $25M today – seems to be a very effective way of legally accepting bribes out in the open. This seems innovative enough I wonder if this is going to become a standard strategy, both in the US and worldwide; I think most countries have a legal mechanism for settling suits out of court.
No, because it's too hard to execute the litigable action with plausible deniability. These suits are all downstream of things that happened on Jan 6. That's too indirect to be useful. No company is going to initiate a bribe by attacking a politician who has a 20% chance of being elected again in 4 years. Cool idea though.
Unless the settlement is in the form of a suitcase full of cash or a chest of Spanish bullion or a canvas bag clearly labeled "SWAG" . . . personally delivered to Donald Trump, it's not bribery.
It's a tax that gets plopped into the general fund and squandered on Something Useless (tm) however you personally define "Useless."
Naturally I only support Federal programs that Any Reasonable Man considers "Essential," but All Those Other People constantly advocate for useless and wasteful spending.
Sad thing is that it's an irrelevant amount of money to both Alphabet and the US Treasury.
You might have missed the point. I agree that the companies he's suing would win in court, and the expense of the trial isn't significant to them. Neither is the money they're settling for: this is a way to legitimize a mutually beneficial transaction.
> Trump's tactic of suing companies for damages, and then settling for a few tens of millions of dollars – YouTube just settled for $25M today – seems to be a very effective way of legally accepting bribes out in the open. This seems innovative enough I wonder if this is going to become a standard strategy,
I'm not sure it is innovative. Just as the Texas anti-abortion law was described as "innovative" for copying the longstanding method that civil rights laws used to ignore the first amendment, this approach sounds a lot like the longstanding agency method of soliciting a lawsuit from an ideologically aligned group and then settling the lawsuit with an agreement to do something that they wouldn't have had the power to do if it hadn't been part of a legal settlement.
I expect that a lot of the effectiveness comes from everyone still treating Trump as an aberration. A one-time payment of $25M is chump change to a company the size of YouTube, and more than worth the cost if it's all they need to do (even if it's multiple times) to weather the storm and wait for sanity to return.
If it started to look like other government officials were going to follow suit, suddenly companies would be incentivized to fight much harder. They keep armies of lawyers on retainer anyway. In a world where non-Trump officials feel free to do the same, fighting some drawn-out legal battles is enormously preferable to signalling that you're open to paying Danegeld to anyone with the power to ask for it.
I agree it's a trifling amount for any of these companies. I think this is "protection money" in a sense more real than simply as a euphemism for extortion: I think the implicit agreement is that Trump, in addition to not attacking them, keeps others with the power to hurt them – I'm thinking primarily of federal regulatory agencies here, but should also include non-Trump officials – from doing so.
"Caught" implies a crime was committed, and I don't understand what crime Youtube could have committed by banning the President's account. Surely Trump isn't claiming that the government should compel Youtube to carry his speech, right?
You might be missing the wood for the trees: that explanation seems narrowly tailored for YouTube and you'd need a different one for the other companies he's doing this to/with.
When I said you'd need a different explanation for each company, I wasn't asking you to come up with them. On the contrary, I meant that you needed a single explanation that works for them all without needing a list.
A lot depends on the response. If Dems win in 2029 and investigate each such case for bribery, then it will probably be shut down. If not then it just becomes an accepted tactic for anyone brazen enough to do it.
The Supreme Court has made it harder, both by narrowing the definition of bribery and by saying the president is immune from prosecution (the obvious tactic would be to tell these CEOs that you'll throw the book at them unless they flip and testify against the president, who also might pardon them). There might be some legal hardball a future administration could play to get around it, but I don't know if they would.
I'm a 33-year-old man who is exactly 5 feet tall. I'm thinking about getting on a dating app because, realistically, that's the best way that I'll be able to meet people to date. I'm dreading it though because I've heard that it's brutal for short guys, and I'm way out on the end of the spectrum. I vaguely remember some study that was done where upwards of 95% of women wouldn't date someone my height. I don't even want to google to find the study because I remember all the info I could find on it before being pretty bleak.
I'm worried that a dating app will destroy the meager amount of self-esteem that I do have right now. How should I prepare myself for this? Am I overly worried?
End of the day: most women are attracted to whatever says "successful mate" to them, which probably can mean "physically capable", and by extension, "tall", but also means a great many other things, which can easily compensate for "not tall". Since you can't help not-tall, make sure the other things work.
Christina's advice about looking like an expert on something is sound. The more, the better, so either be good at lots of things or _really_ good at a few. So is her advice on looking like you care about how you look, which I take to really mean that you should care about taking care of yourself physically. So, chances are, you can look healthy (exercise + diet). This demonstrates self-discipline; everyone likes someone who's self-disciplined.
If you're concerned enough about self-esteem to bring it up, then that might be your first challenge. You're likely to see a lot of improvement if you can express confidence in yourself. If you're there, you're in really good shape.
I'm reminded of a fandom convention I went to years ago. That evening, I'm walking along, and I notice a crowd of guys trying to talk up a chum of theirs who'd just turned 25 and was still single. I keep going (to the men's room), wash up, head back out, they're still trying to boost him, telling him not to be worried. I lean slightly to the side as I pass by and say, "they're right you know". Then I turn around, thumbs at myself, "40", and give my best "I'm not worried either" pose. One of the boosters yells out "you are my hero!!". I grin, salute Mr. 25, and keep walking. They didn't have to know that the woman I was crushing on for several months mentioned that day that she was interested in another guy.
Today I live with my GF, steady as a rock.
There are _some_ women who like a guy with self-esteem issues (it comes off as vulnerability, and they're looking for someone they can nurse), but confidence probably puts you in a larger pool. Plus, it's probably healthier independent of whether you meet a nice gal.
Make sure some of your photos on the app show you expertly doing *something.*
Most women who are smart enough to be meaningfully attracted to someone smart enough to be an ACX reader/commenter are going to be profoundly attracted to the confidence which comes with competence in some skill that matters to others.
So if you play a musical instrument, double down on practice *and performance.* If you cook, get better at plating. Play a sport? Make sure it looks like you love it. If you're interested in topping in kink, devote yourself to learning rope bondage, especially rigging, and go to lots of public workshops (I was friends with a 5'2" older guy who developed a really amazing skillset in electrical play, and he had many kinky women following him around despite looking NOTHING like a conventional porn star). Etc.
The exception to this advice about highlighting a skill is video gaming. While there may be some women who deeply admire expert gaming, in reality, no there aren't.
You're a guy, with presumably normal male biological impulses, so you may be tempted to dismiss this advice because you're used to human attraction being overwhelmingly visually based. Please try to ignore your own experience and take this on faith: Women are indeed different than men when it comes to attraction. For the smarter ones, expertise is as attractive as a long body and square jaw. Formula 1 drivers are small men who don't lack for women's attention, as are gymnasts and jockeys and musicians and and and...
You don't need to be famous, of course.
You just need to be objectively good enough at something to legitimately earn the confidence of being good at something.
>The exception to this advice about highlighting a skill is video gaming.
I would also recommend against a profile picture showing you fishing. "Man posing with fish" profile pics seem to be enough of a cliche to inspire eye-rolling among women of my acquaintance who use dating apps. My social circles (which lean very nerdy and very blue-tribe) are probably culturally less likely to consider fishing an appealing hobby, but even among women who do find fishermen alluring, I expect the market is saturated.
It’s a little known fact that the postal service didn’t actually have any rules against their employees dating women, it just worked out that way for me.
If it makes you feel any better, Robert Reich is 4'11." But he seems to have a sense of humor about his height (whether he did at your age, I don't know). So, maybe make light of it in your profile?
I completely understand that it hurts to be rejected/not-selected regardless of the reason: if you can develop the mental habit of only "counting" rejections for appropriate reasons (difficult, I know!) this might make it less distressing and easier to bear?
For example, if some woman rejects you but she's the sort of person you wouldn't want to date anyway, that's a great example of the sort of rejection that shouldn't count (if anything, it should count as a bonus: this woman you wouldn't want to date is doing the work of removing herself from your dating pool for you!)
If this system makes sense to you, I suggest that "I'm the sort of person who filters based on height rather than on personality, shared interests, even on looks/vibe more generally" would be an excellent candidate for the sort of person you probably wouldn't want to date anyway?
If the average match rate for a 6' guy is 1 per week (I made this number up out of thin air, as I've no idea really! The general point should hopefully be the same if you plug in different numbers) and the average match rate for a 5' guy is 95% lower (but it's cool: that 95% is largely made up of women whom you mostly wouldn't want to date anyway!), then your match rate would be about 1 per 5 months (arithmetic sketched after this list):
A) This isn't to be sniffed at! It's far more than if you don't join in the first place; if the first match turns out to be perfect, that's a total of 5 months' wait to meet your soulmate, which seems like a great deal; if the first match isn't perfect, this is still good and healthy; see Scott's writing on micromarriages for more info: https://www.astralcodexten.com/p/theres-a-time-for-everyone
B) If you get more than 1 match every 5 months - you're doing better than your expected baseline, your other qualities are so attractive that *even some of the "I don't match with short guys" women are matching with you*(!), and you should actually feel very good about yourself!
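And the promised sketch of that arithmetic, a minimal back-of-the-envelope calculation with the same made-up numbers (plug in your own guesses; nothing here is a real statistic):

```python
# Back-of-the-envelope match-rate arithmetic. Both inputs are the
# made-up illustrations from above, not real data.
baseline_matches_per_week = 1.0   # hypothetical rate for a 6' guy
height_penalty = 0.95             # hypothetical "95% lower" figure

matches_per_week = baseline_matches_per_week * (1 - height_penalty)
weeks_per_match = 1 / matches_per_week
print(f"~1 match every {weeks_per_match:.0f} weeks "
      f"(about {weeks_per_match / 4.35:.1f} months)")
# prints: ~1 match every 20 weeks (about 4.6 months)
```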
I struggle to understand how you can describe someone as "moderately short" when they are above the average male height in the US (5' 9"). If I were the OP I would be unable to relate that comment to my situation and indeed would probably find it quite offensive.
It’s the distorted world of the Internet, where leading ladies are “mid” and normal guys are short because everyone’s reference points are online celebrities or even AI generated.
You might benefit from telling yourself that self-esteem is a decadent concept that is eroding the foundations of Western civilization - it does seem to be holding you back.
I would advise you to treat your situation like an emergency and sign up ASAP. Take time off work if necessary. Sign up for as many as possible and ask yourself if you're willing to date fat women/single mothers/etc. There are even dating apps especially for fat people.
What's your race? Ngl, it will be pretty brutal, so be mentally prepared. Prepare for the worst, and even hope for the worst, honestly. It sucks, but you have to play the cards you've been dealt, unfortunately.
I'm white. The worst case scenario is getting no matches and a small number of mean spirited messages. I'm thinking that cruel messages aren't super likely but no matches is realistic.
That's a pretty good advantage to have in the dating market. You have an ok chance with East/Southeast Asian immigrant women, I'd say. Get the premium version of 2 dating apps at a time and swipe somewhat selectively for a few months. Watch out for scammers. You may make mistakes here and there at first (asking out too soon, asking out not soon enough, haha) but with time you should get the hang of it. Have a decent bio (job, if it's something that pays well, plus 2-3 hobbies). Look online for examples of bios and fine-tune yours accordingly. 3-4 good photos (no sunglasses, with different clothes at different locations). Also remember most guys get very few matches to begin with; with you it will be even slower, but it's a grind, and if it works out it will be worth it. Don't spend more than 5 minutes on dating apps per day. Be disciplined about that. If after a few months you feel like it's hitting your self-esteem, just delete the apps for a few months and focus on something else. Come back to it when you are ready.
Seconding some of the advice on the photos, and I'm going to escalate by advising looking into professional portrait photography. I wouldn't go so far as to do a formal corporate or actor-style headshot, but rather "candid" (those quotes are doing a lot of heavy lifting) portraiture by a real expert who knows how to make a person's face a story.
Everyone with a phone thinks they're a photographer, but someone good enough to charge money will catch your face looking compelling, even if you're "ugly."
I don't think a professional portrait is going to work. It comes across as trying too hard, and again, girls see through these things. His face is not the issue either. He is probably attractive enough; it's just the height that's the issue, which professional photography won't help. Professional photography will help if you have an ugly face but average height. But again, no harm in trying different things, I suppose.
Men who care *enough* about what they look like, which is to say, care at *all.* This covers the pretty basic stuff of having good hygiene and basic grooming, dressing in clean, well-maintained clothing which fits their body, and so on.
A photo which communicates "I care enough about making a good impression to share this excellent photo" conveys all of the above, plus more.
Most women (not "girls," the OP is 33!) are not going to care where a compelling photo of a man came from. A "candid" professional photo of the OP flambeing a crepe or etc can be explained as a "candid" shot someone took during a dinner party.
I'm pretty skeptical of the claim that apps are the best way to meet people to date. I don't have skin in the game, so my opinions are admittedly perhaps armchair, but maximizing face-to-face interaction especially in disproportionately female spaces always made more sense to me for the meeting people part.
That works if you are already attractive or have insane charisma. There's a limit to how many people you can meet in real life. But with dating apps, if you live in a large city your pool is in the hundreds of thousands. I am a quite ugly guy, and the few dates I have been on have only been possible due to dating apps. If I'd lived in the 90s, where I had to meet girls in clubs or something, I would have been cooked haha.
Damn. If you're right I've been giving people terrible advice. Nonetheless, I think charisma is pretty learnable? Like, I suspect that if someone read a book on therapeutic microskills, and did loving kindness meditation for somewhere between fifteen and sixty minutes a day they'd have the basis for iterative improvement? I am myself fairly charismatic and reasonably attractive though, so I always wonder if I'm just full of shit on this subject.
That is a very fair question. Although it would behoove me to have an answer given how strong my opinions on this subject are, I have never actually assembled a curriculum or anything. If you dm me I'll give you my number, and I'd be happy to chat about the subject, especially considering everything below is pretty ad hoc.
Back to your request: I think my suggestion would be some mix of microskills books, meditation books, and therapy books. Your goal here is to be able to feel ease and love so that you can express that reaction to others, sincerely enjoying their company. For the microskills book I read Intentional Interviewing and Counseling by Ivey and Ivey. It's not, like, great, and if someone knew a good book on therapeutic microskills I'd be really interested, but it covers encouragers, reflecting, and summarizing, which are really what you want.
For meditation books, what you're going for is loving kindness. If you can cultivate an intense experience of love, you'll get the ability to convey it to others, which will make you really enjoyable to be around. I really like I'm Right You're Wrong by Ajahn Amaro and Broad View Boundless Heart by Ajahn Amaro and Ajahn Pasanno. The authors are Buddhist monks, so if you prefer something secular I think Judson Brewer has guided meditations, and Sam Harris might cover loving kindness in Waking Up?
Which therapy books are good kinda depends on your personal taste and what's difficult for you. Generally, what you're going for is excellent self-esteem, non-anxiety, and communication skills. Grading by enjoyability of writing and usefulness of content, Judson Brewer and David Burns both wrote good books. The New Peoplemaking is also great if you have difficulty with your family of origin.
You can learn enough charisma to get on with day to day life(work, school etc.) but you are not going to be charismatic enough to pull girls while being conventionally not attractive. That kind of charisma is natural. Girls are not dumb either. They are insanely perceptive and can sense you are just following the steps from the How to be Charismatic guide. There will always be a small number of girls who like a specific type of guy(short, autistic, ugly, goofy, awkward) whatever. They themselves might not fit into any of the above categories but for whatever reason they like them. Our best shot is to cast a wide net on the dating apps and try to find them. But the process will be absolutely brutal and there's always a good chance it might not even work out. But again life ain't fair.
Hmm. I think there's truth to what you're saying. Like, I have a friend who seems to work from a playbook and sometimes he comes across false. Meanwhile, my experience has been mediated by being conventionally at least mildly attractive. Still, for me across the arc of my life, learning charisma has _felt_ like a skill. Like, I have over time a more coherent notion of how to befriend someone, or improve someone's day, or ask a store employee to do me a favor. My friend who read the books too, he genuinely can carry a conversation with a stranger better than most.
Do you apply your suggestion to middling attractive people? I haven't actually advised anyone who is _un_attractive. But a lot of people who have strengths seem to obsess over their deficits. While not contradicting anything you said, I remain confident a big chunk of attractiveness/charisma is intrapersonal and learnable.
Honestly, no harm in trying. It's cliche, but everyone's different. There are a million subtly different things about us, physically and behaviorally, which are perceived subtly differently by everyone else as well.
I will take back what I said before, and advise anyone reading this to just try different things out. See what works and on whom? Maybe the steps from the guide to charisma might just work out for you. Even if it doesn't work on girl 1, it might work on girl 2, who knows? But you still will be interacting with people, and that alone goes a long way towards improving your social skills.
Yeah, I think I imagine human relationships as being fairly, well, Pavlovian. Like, if you interact with someone and bring them delight, it's easy to imagine wanting to interact with you more. If you sincerely enjoy the exchange, it's much easier to put forward signs of interest since it's very low stakes to refuse you (you enjoy them regardless). Having female friends is a good way to get introduced to eligible women, and statistically some quantity of people are in fact single and interested. I suppose my model supposes that it is possible to learn to feel and spread joy, which not everyone agrees with.
Per your points though, it's super great to do stuff with people because as you say it's a fun way to connect—plus if there isn't romantic compatibility, you still had fun hiking or whatever!
Bruh, alright. As someone else said, if you put your height up front, hopefully nobody will match with you just to send hurtful messages.
That being said, there is a real grind and soul-sucking aspect to it. For guys of average height it's already madness. Even though I thought I was desensitized, the constant bs still hurts.
That's your greatest risk. The grind. If you can withstand that then maybe you'll make it out.
I'm skeptical of the 95% figure. I haven't heard it before, and when I went looking for it just now the search results sound like internet folklore based vaguely on some survey that someone once did about how many women are interested in dating men who are shorter than they are. The impression I get is that among straight+bi women, a small but vocal minority have a strong preference for dating taller men, most have a mild preference for taller men but it isn't a dealbreaker if you're attractive to them in other respects, a nontrivial fraction don't care at all, and a few probably have an active preference for shorter men.
Even if the 95% figure is accurate, five percent of the millions of women using Tinder or whatever is still a rather large number of people in absolute terms. Some fraction of these won't be interested in you for other reasons, or you won't be interested in them, or both, but you only need to find one good match for the search to be worthwhile.
It's a pretty good starting point, because women generally only swipe on ~5% of men to begin with, so you're necessarily going to be addressing a small pool:
I don't think the overall swipe rate is a good proxy for height filtering, since women make that decision for any number of reasons. Some of those reasons are presumably highly correlated between different women (conventional attractiveness, decently-written profile, etc), while others are less correlated (cultural signifiers, shared interests mentioned in profile, reminds her of an obnoxious ex, etc).
The "is much shorter than you" chart is consistent with my expectation that a many women prefer taller man to varying extents, but many care little or not at all about height, and it's one factor among many for at least some of the women who do care about it. Note that by that chart, 36% of college-educated women and 52% of non-college-educated women self-report not caring about men being much shorter than they are.
I'm not 100% sure how to read the Bumble chart without more context. Based on the range of the Y axis, I suspect it's saying that among women **who set height filters**, Y% of those filters include men who are X height. If that's what it means, then it doesn't really tell us how many women set height filters at all. It does tell us that an awful lot of women who do care about height aren't interested in dating professional basketball players or men who have to duck when going through doorways. It also tells us that minimum height filters are set at a variety of levels, many to only include conspicuously tall men, some to exclude men of below-average height, and some to exclude men who are at what is presumably the filterer's own height give or take a few inches.
This sounds right to me. I'm a tall woman, happily married for almost 30 years to a man much shorter than I am, and I could not give less of a damn about his height / our height difference. He makes me feel awesome and is truly my better half. And, as my grandma once said "they're all the same height when they're lying down" ;)
“Even if the 95% figure is accurate, five percent of the millions of women using Tinder or whatever is still a rather large number of people in absolute terms.“
Exactly! It’s a numbers & time game, and one has to throw their bait in the water where the fish are and wait. Just cuz a fish bites and you don’t land it doesn’t mean a fish won’t come along later!
If it makes you feel better, I estimate that if you're otherwise about average, upwards of 95% of women you "swipe right" on aren't going to date you anyway.
Just put your height on the app so it's visible. Anyone who matches/chats with you will be aware of your height and is unlikely to be a dick. I think it's probably a great way to meet girls that are interested!
I'm definitely going to be upfront about it. I think one of the pictures I'll upload will make it obvious that I'm smaller while (hopefully) still being flattering.
Good luck! And remember… you have a pretty awesome camera in your pocket that can provide solid photos. I made a homemade little camera stand for my phone to take some good pictures of myself and it made a noticeable difference in the amount of likes I get. As guys we don't often get good pictures of ourselves taken out in the wild… it's not necessarily "cool" to set up your own little photo studio for your dating profile, but it can be worth it if the shots are staged well.
The book "Homo Carnivorus" settles the "is meat healthy or not?" question for me. It makes a strong and substantiated case for why meat is (probably) healthy. If anyone cares about that, I would recommend you read that book. It basically got me to stop wondering and worrying whether what I eat is beneficial to my health or slowly killing me.
If you're on the fence about attending a meetup: I strongly recommend going! I'd never been to an ACX meetup before, and I was worried I wouldn't fit in or it would be awkward, but everyone was super welcoming and I had a great time.
I have seen several blogs whose authors do not use any capital letters. This is clearly a deliberate choice, as the authors are skilled in writing. Does anyone know what the meaning / intent behind this decision is?
I've heard it called 'lapslock' and to people from certain social media spaces it reads as an informal, casual tone, a bit laconic. Having spent time in places where this and other idiosyncratic punctuation is common, I do indeed get cues about emotional tone from where a poster uses or omits punctuation, and it can be used for humorous purposes. Think "What?" versus "WHAT" versus "what."
Reading long form text in lapslock is rather obnoxious though.
It’s a little-known fact that while there are a very large number of capital letters out there, their numbers are finite. With their unprecedented and indiscriminate use in Truth Social posts, their TFR has fallen below replacement levels.
I suspect that these writers are concerned citizens drawing a lesson from the decimation of the vast herds of American Buffalo and are simply trying to preserve them from extinction.
TYPICAL "PEAK CAPITAL" PROPAGANDA. IF CAPITAL LETTERS WERE TRULY RUNNING OUT, WHY HASN'T THE COST OF PRODUCING THEM INCREASED? WE KEEP FINDING NEW RESERVOIRS ALL THE TIME, AND MOST COUNTRIES HOLD VAST STRATEGIC RESERVES TOO. I, FOR ONE, GREW UP WITH A DIZZYING ARRAY OF CAPITAL LETTERS, REFINED INTO ALL SORTS OF FONTS AND ITALICS, AND TO STOP USING THEM OUT OF SOME MISGUIDED SENSE OF DO-GOODERY WOULD BE TO ROB OUR KIDS OF THEIR FUTURE! I'LL BE ROLLING CAPITALS FOREVER, JUST TO SHOW YOU!
I'm not totally sure why it originated, but I think it's nearly always a feature of girlblogging on tumblr, or stems from a tradition of girlblogging that began on tumblr.
2. Author is an eagle typing with the talons of one foot while perching with the other foot
3. Author's shift key and caps lock key are both broken.
4. Author is typing on a phone and finds the extra tap required to make a capital letter cumbersome
5. Author is of an age bracket where text/chat speak is the standard casual register for written communication.
6. If all-caps are shouting, then all-lower-case is whispering, and the author wants to use this to create a sense of intimacy and sharing secrets with the reader.
Of these, 5 seems the most likely, followed by 1 and 4.
I think it's supposed to exude a feeling of informality and intimacy, as if you're just exchanging text messages, instead of reading hoity-toity long-form essays in the New Yorker.
My guess is that it makes it feel more stream-of-thought as opposed to polished, so that you see the writing as a look into the author's mind rather than as text that stands on its own.
It's not really my thing, but this person is one of the funniest writers I've discovered on Substack, so I ignore it. I'd also like to chime in with TotallyHuman and say I'm not sure what the actual effect is supposed to be: your guess seems as good as any.
It strikes me, reading that, that we don’t really need capitals. We do need quotation marks though - there’s a tendency for modern authors to drop them, and it will get old fast.
> We do need quotation marks though - there’s a tendency for modern authors to drop them, and it will get old fast.
I understand that Old Chinese texts use different verbs for introducing quoted speech. There are no quotation marks, but the distinction is drawn anyway.
You might see a bit of this strategy in English, where "said" might mean anything, but "quoth" can only report quotes.
(Although browsing through the wiktionary citations, "quoth" appears to be able to report indirect speech before the 20th century.)
Hello all bloggers and wannabe bloggers in the "rationality-adjacent" sphere!
You have probably heard about https://www.inkhaven.blog/ -- and if you have not, you might want to click that link and read it right now. Long story short, there will be a 30-day training for aspiring bloggers, where they can receive some wisdom from their more experienced colleagues (including Scott Alexander), and in turn they are required to post 1 article on each of those 30 days, or be kicked out of the camp. Publish or perish! You need to be actually at that place, for the entire month.
And by the way, the deadline to apply is tomorrow. (Not sure if there is still any place left.)
If you are like me, you are probably complaining about the cruel fate that doesn't let you take a month of vacation exclusively for your hobby. (And if you have the time, you are probably unemployed and don't have the money.) Even if we skipped the part where you have to be there in person, and allowed online participation, there is probably no way you could write 1 article each day. Luckily, there is an alternative.
It will be half as intense, but twice as long: to produce 30 blog posts within 61 days, during October and November. We will start sooner than the Inkhaven group, and finish at the same time, hopefully with the same output. That means making one post about every two days, approximately; the rules will be much less strict, you won't be kicked out if you don't have anything posted by day 2, but you will be if you don't have anything posted by day 7, because otherwise what's the point.
Languages other than English are allowed; videos are also an acceptable medium; it is not necessary for all 30 posts to be on the same blog; topics are not specified, just please don't post anything that would get you banned in an ACX Open Thread. Also, no AI-generated text! If you already have a blog and don't want to ruin it by suddenly writing too much with lower quality, it is okay to create another blog for this purpose. It is okay to publish pseudonymously. There is no reward other than your own feeling of accomplishment, and some peer pressure to perform.
tl;dr: Looking for AI safety events in the Bay Area in January 2026.
I am a mathematician visiting Stanford University in January 2026. In my spare time I've been trying to work on AI alignment the last couple of years, but feel somewhat isolated. Any suggestions on people to talk to/places to visit while I'm there (events/conferences/bothering-random-researcher activities)? Thank you!
That would be great, thank you! I will send you an email closer to the date - is the @eulercircle.com email address I found on your web page a good way to contact you?
Re-announcing that we're recruiting for the cholesterol:coprostanol study (https://docs.google.com/forms/d/e/1FAIpQLSf_BXwlEJaGxtQVtOpTzLMgpCmzLbA171izWx0EfSBBAKnvOw/viewform). We'd particularly love participants in the Bay Area, so we can do some real-time testing with the probiotic, to look at engraftment and whether serum cholesterol levels change after the species is introduced—but we'll take people from anywhere. Signup is free and participation is easy; let me know if you have any questions!
So in a previous thread someone suggested that I could mute people on Substack and not see their replies. Well, I did this and I'm still seeing replies. Do I need to block them? And then what does mute do?
Yes, try blocking to get rid of their comments. I am confused about this too; I guess "mute" only refers to direct messages and "block" refers to everything.
This is kinda an extension of something I wrote as a reply below...
How sure are we that returns to intelligence are linear or better, especially across all/most areas of life? It seems that ASI predictions rest on an assumption that going from (the equivalent of) IQ X to IQ X+10 provides the same or greater benefit regardless of X and regardless of what you're trying to do with it--that a super-smart entity would be super-persuasive and super-capable in every aspect of life it tried its hand at--i.e., that intelligence is a general superpower.
Can someone steelman this assumption?
Because my experience is the reverse--that IQ (intelligence generally) is strongly subject to diminishing returns and behaves a whole lot more logistically. You get great returns going from sub-normal (~80 IQ) to normal (~100) and pretty darn good ones going from normal to "genius" (~120 IQ). But even that latter jump is narrower than the previous ones. An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities" (very present-oriented, limited ability to really consider other people's experiences, etc.) relative to an IQ 100 person. But I haven't seen the same level of increase in anything but sheer academic capability (which often doesn't translate into other fields, even relatively intellectual ones) from IQ 120-ish people. In fact, I've often seen *regressions*--people who are really smart often struggle to talk meaningfully to "normal" people and fail to connect with how they see things. Which suggests to me that IQ starts losing a lot of its punch the higher you go. And may actually correlate with *reduced* performance in other aspects of life.
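To make the shape of that claim concrete, here's a minimal sketch of what logistic (rather than linear) returns to IQ would look like; the midpoint and scale parameters are pure illustration, not fitted to any data:

```python
import math

def logistic_return(iq: float, midpoint: float = 90.0, scale: float = 15.0) -> float:
    """Toy logistic 'returns to IQ' curve: steep gains near the midpoint,
    flattening out at the top. All parameters are illustrative, not empirical."""
    return 1.0 / (1.0 + math.exp(-(iq - midpoint) / scale))

prev = logistic_return(60)
for iq in range(80, 161, 20):
    cur = logistic_return(iq)
    print(f"IQ {iq}: return {cur:.3f} (gain over previous 20 points: {cur - prev:+.3f})")
    prev = cur
# IQ 80: 0.339 (+0.220), IQ 100: 0.661 (+0.322), IQ 120: 0.881 (+0.220),
# IQ 140: 0.966 (+0.085), IQ 160: 0.991 (+0.025)
```

Under these (invented) parameters the marginal gain per 20 IQ points peaks around the 80-to-100 jump and collapses past 120, which is roughly the pattern I'm describing.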
I also don't see very many highly-charismatic super-smart people. Are chess geniuses (all very high IQ) good politicians/people-persuaders? Not that I've seen. Are hard-science Nobel Prize people that much more moral or capable than others? Not that I've seen...many of both sets tend to be cranks and pretty incapable outside of their narrow specialty. Even *within* their broader specialty (e.g. physics), a genius at quantum mechanics isn't better than most smart grad students at, say, general relativity--the skillsets and knowledge base are too different. And they're not good at all at, say, organic chemistry.
My background is in academia (PhD in Computational Quantum Chemistry), but I've also served as a missionary in Eastern Europe, worked with lots of uneducated people as a teacher and with community service, and am currently a programmer at a non-elite smaller company.
A slow thinker considers how important his lack of speed is, and concludes that slow thinkers and fast thinkers do about equally well in life, and he's arranged his life to have room for him to take the time he needs.
I wonder whether IQ tests select for fast thinkers and might miss out on some slow but good thinkers.
Anecdotal, but when I was tested for the gifted program in 5th grade, my IQ test came back abnormal because I spent so long ensuring correctness on a few sections that it timed out. So it looked really low on some sections and high (not genius) on others.
So I can relate. But then I've also seen that smart people can usually get to an answer faster, so... Not sure.
I've often thought that my particular brand of intelligence (which isn't top tier by any measure, but was enough that I only had to start trying in grad school and I passed the preliminary exam on the first try, which is rare at that school especially since I didn't study at all for it) is more about making connections between things I know and interpolating from a wide base of knowledge than about raw horsepower or creative spark. Generalist vs specialist intelligence, maybe?
I'm not inclined to grant your premise. The most intelligent people I've met or worked with (thinking world-class here) have *without exception* been the people who are best at "being people" -- they are likeable, wise, trusted (and trustworthy), have lots of friends, lots of admirers; if a problem (in any field) comes up, then they are the people who *other* people turn to first.
I have little doubt (but see below) that if society were such that reproductive success were primarily determined by that kind of "popularity" (influence or wisdom might be better words), instead of incompetence-at-using-contraception (;-), then they would be the most successful breeders.
One counter-point, though: I've seen more (clinical) depression among them than in wider society. I suspect that to some extent survival as a human depends on being profoundly mistaken about some aspects of the world -- the most intelligent just don't seem to be that good at being wrong.
Some examples (admittedly, not people I've *met*): Turing (he of the machine) was widely liked (or loved -- non-sexually), was a great uncle, very practical, and an excellent athlete; as I understand it von Neumann had many friends and could not sanely be described as impractical; Wittgenstein (my favourite genius) had many admirers and was much liked (unfortunately poor Ludwig himself wasn't among them), and he was working on jet engine design before he switched to philosophy (I'll grant that he was a famously crap teacher...).
Another datum: here in the UK it isn't seen as good to comment on the height of one's own intelligence. It is also not seen as good to comment on how rich you are. The stereotype of the LOMBARD ("Lots Of Money, But A Right Dick") is well-entrenched (and actual Lombards are not rare). There doesn't seem to be a parallel concept for intelligence (or many people exemplifying it) -- presumably because the highly intelligent nearly always have the people skills to recognise and obey the social prohibition.
I suspect that the IQ (or whatever) that people have gives them better ability to do *what they want to do*. If, occasionally, someone wants to be a driven, hugely rich, near-sociopath, then intelligence will aid that too.
I present a hypothesis: that intelligence (or rather returns-on-increments-in-intelligence) is not intrinsically asymptotically limited. Rather, I suspect that the main value of intelligence is effectively navigating the human world. Just as it's an advantage to be a couple of inches taller than most people, it's an advantage to be a bit more intelligent than most people. Just as there's a limit to how tall you can be *and gain more advantage than the costs* (about 6' 3" here in the UK), there's a point where being more intelligent than your peers is essentially pointless (I have no idea what the actual *costs* of being still more intelligent would be -- that's a weakness in my position). If everybody had IQ 200 (in today's IQ points), then the people with IQ 230 (say) would benefit. If the average were IQ 2,000, then the folk (or machines) with IQ 2,300 would be best able to arrange their worlds to their liking.
So, no asymptotic limit, just a cut-off on what's actually valuable in the here-and-now.
We may be (I suspect we are) in the middle of an arms-race (played out over the last few million years and projecting forward to extinction); we *might* have reached some sort of limit-point, but I see no strong evidence of that. We *may* be adding another set of players to the pool; those players *may* be better at whatever-they-want-the-game-to-be than we are (they certainly will be *if* we can come up with a couple of significant advances in AI -- more significant than LLMs, which I see as, at most, a small component of this hypothetical AI, perhaps contributing to the UI).
It seems fairly obvious to me that there are threshold effects. But set aside any logistic saturation - just look at empirical effects in the world, and you'll see there are outsize returns to higher IQ / ability in general.
Most economic growth, company creation, patents, and technological progress comes from the top 10-20% of people in a given nation, and it gets more concentrated the further up you go. Arguably, something like 60-80% of “progress” is driven by the top 10% or better.
Ivy leaguers are only 0.5% of the people in the US - yet 20/21 presidents in the last 100 years have been Ivy leaguers. 100% of Supreme Court Justices. 41% of Senators, and 20% of House representatives. 50-60% of federal appellate judges, and 30-50% of state Governors and Cabinet members.
But it’s not just Ivy people!
60-70% of patent authors/holders have a graduate degree (usually in STEM fields), and STEM degree holders are 5-10x more likely to hold patents than non-STEM degree holders. PhDs file 5x more patents per capita than bachelor's degree holders.
Yet the percent of the US that has a graduate STEM degree is only 4-5%.
If you look at the unicorns of the last couple decades, the founders are generally Ivy educated, and from wealthy and connected families. Since just the Ivy league is “0.5% or better,” you can see the rough degree of concentration.
In fact, in general, if you look at normalized Rasch IQ scores versus problem difficulty, solving complex problems gets exponentially harder as difficulty rises, and you need to go further and further out on the IQ and ability curve to even have a chance of finding a solution.
“This means that for the hardest problems, ones that no one has ever solved, the ones that advance civilization, the highest-ability people, the top 1% of 1% are irreplaceable, no one else has a shot. It also means that populations with lower means, even if very numerous, will have super-exponentially less likelihood of solving such questions.”
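If the "Rasch" scores mentioned above refer to the psychometric Rasch model (my assumption), the mechanics behind that quote are easy to sketch: the probability of solving an item is logistic in (ability - difficulty), so for a very hard problem it collapses rapidly as you move down the ability scale. The specific difficulty and ability values below are invented for illustration:

```python
import math

def rasch_p_success(ability: float, difficulty: float) -> float:
    """Rasch model: probability a person solves an item, with ability
    and difficulty expressed on the same logit scale."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

HARD_PROBLEM = 5.0  # hypothetical difficulty, far out on the scale
for ability in (0.0, 2.0, 4.0, 6.0):  # population mean around 0 logits
    p = rasch_p_success(ability, HARD_PROBLEM)
    print(f"ability {ability:+.1f} logits -> P(solve) = {p:.4f}")
# ability +0.0 -> 0.0067, +2.0 -> 0.0474, +4.0 -> 0.2689, +6.0 -> 0.7311
```

In this toy model an average person has well under a 1% chance per attempt, while someone six logits out is more likely than not to solve it: the "irreplaceable tail" claim in miniature.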
Hence progress being driven mostly by the extremes of the bell curve in ability, IQ, and background.
These have all been <<1%-tier people so far. Now extend this out! Sure, this is the tippy top, but think of ANYONE you know who’s filed a patent or started a company or small business, or done something that impacted a lot of people positively. Odds are, they are smarter, more conscientious, more educated, and from wealthier backgrounds than average - and not just by a little, but by so much they’re likely in the top 5-10%.
Overall, you can see this top 5-10% of people are punching FAR above their weight when it comes to economic growth, company creation, patents, and technological progress, and in fact, this tiny slice of humanity is likely driving the overwhelming majority of those things.
The above are from a post I made on high human capital fertility; you can read the whole thing and see the images and links in situ here:
It's not just about sample size, it's about genes. And to get better genes, you need smart people to have more kids in order to have more chances of producing something exceptional. We haven't reached the peak of human evolution yet.
I'm somewhat (but not totally) skeptical of IQ as a general concept, and I think this illustrates a bit of why:
" An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities" (very present oriented, limited ability to really consider other people's experiences, etc) relative to an IQ 100 person. "
I remember reading someone--I think it was Nassim Taleb--arguing that most of the spread of human IQ scores was actually caused by pathologies that worsened people's cognitive function, or something like that. And while I suspect he may have overstated the case (shock!), it seems likely to be a significant factor.
Consider, for example, who you would expect to score better on an IQ test: a "naturally" IQ 80 person, or a "naturally" IQ 100 person with a severe headache? How about for charisma? I'm pretty damn sure I'm less personable when I have a headache. Now consider how many physiological maladies might be less obvious and apparent than a headache, but still broadly impact somebody's ability to perform cognitive tasks[1].
If this sort of thing does play a significant part in determining IQ scores, it would naturally explain a lot of the diminishing returns: the differences at the top end of the scale would largely be differences between "little impairment" and "very, very little impairment." Which might be important when doing very subtle, tricky, sustained bits of thinking like math and science problems, but aren't going to look very different in most other areas of life.
[1] This has been in my thoughts a lot lately, as I've
A. recently had some intermittent health issues that effectively seem to make me dumber when they flare up
and
B. noticed a modest amount of evidence that I might have had these issues for many years, but that they were mostly too subtle to notice (while still being somewhat impairing).
> "Consider, for example, who you would expect to score better on an IQ test: a "naturally" IQ 80 person and a "naturally" IQ 100 person with a severe headache. "
Still the person with the IQ 100 (and especially the person with the IQ 120, or 140), assuming the headache isn't literally physically debilitating.
But more relevantly, I'd generally expect the IQ 100 people to be much better at usefully updating their priors about *anything* under the stress of pain, while the IQ 80 (and especially IQ sub-80) people often don't have the capacity to do that even when they are in the peak of health.
>"[1] This has been in my thoughts a lot lately, as I've
> "A. recently had some intermittent health issues that effectively seem to make me dumber when they flare up "
How many IQ 80 people (for lack of a better term) do you know very well, having interacted with and observed them for a long time?
I'm guessing not many, because you're understandably assuming that an IQ 80 (or lower) person is just a more extreme version of a smart person like you being less-smart under stress. I don't blame you, that's how I used to model low IQ, too, at least until I spent a LOT of time working with IQ 80 (and perhaps below) people and observed over *years* that there are intellectual tools around observation and self-reflection that they simply didn't have and which couldn't be taught.
I've been sitting on a draft of an essay expanding on the comment I made here (https://www.astralcodexten.com/p/open-thread-314/comment/49094023), but it's been a mighty struggle to find a way to explain to people smarter than me that *NO, REALLY*, there are people who are so stupid that smart people can't even model their mental state.
"How many IQ 80 people (for lack of a better term) do you know very well, having interacted with and observed them for a long time? "
The concise and correct answer is "I have no idea, since I don't go around handing out IQ tests." The only person whose IQ I can reasonably claim to know is my own, and in practice I'd have to look up a conversion from an SAT score.
However, while I wasn't thinking about it when I wrote my original reply, I actually have quite a lot of relevant background. I worked for many years as an educator, putting in well over 10,000 hours in some mix of tutoring, TAing and teaching very small classes. If I pared that down to just contact hours, and further pared it down to just hours when I was offering direct help to an individual student[1], I expect the total would still likely exceed 10,000.
Permitting myself some unprincipled guesswork, however, I'd estimate the answer to your question to be "very likely more than 0, though probably rather less than 10." Without particular effort I can recall the names, faces and general dispositions of perhaps five people who met all the following criteria:
1. Appeared to me to have a very difficult time with academics generally and with mathematics (what I was most often teaching) in particular.
2. Had been identified by some outside authority as someone needing fairly intensive and long-term help to progress academically. Usually but not always this meant they had an IEP. Usually but not always they were high-school age.
3. Worked with me often enough and closely enough for me to get a good sense of how quickly they learned things, how well they retained things, how these varied between a typical day, a good day and a bad day and how common each of those three were.
A few observations:
A. For most such students, the difference in progress between a good day and a bad day was extremely pronounced.
B. In some (but not all) cases, there were one or more readily apparent reasons why some days saw less progress than others: for example, one student had regular trouble sleeping, to the point that they would often fall asleep in front of me. "Fall asleep in front of me" days unsurprisingly involved much less progress than "reasonably alert" days.
C. I also worked with a number of quite talented students, some of whom displayed similar tendencies to what I describe in A and B. In fact, I recall one very talented student with very similar sleep troubles.
D. If you compared good-day to good-day or bad-day to bad-day, the mathematically talented students would certainly outperform the struggling students (obviously). But I would guesstimate that a bad day for one of the talented students would tend to see around as much progress as a modestly-above-average day for one of the struggling students.
My sense is that we have some STRONGLY clashing intuitions on some combination of "what IQ 80 means in practice" and "what determines someone's performance on an IQ test."
First, IQ 80 is (to my understanding) low but not abysmally low. It's 1.33 SD below the defined mean, which means (if the distribution is properly normalized to the population) around 9% of people have IQ that low or lower. This means that anyone who isn't a hermit and doesn't live in a bubble that's strongly filtered for IQ[2] should know multiple people of around that level. To my understanding, the threshold to be considered to have an "intellectual disability" is IQ 70[3], which 80 is well above. I'll note that I was NOT trained or qualified to work with people with intellectual disabilities, and would never have been put in a position to. So while I'm not comfortable making specific guesses or estimates about any student I worked with, I am comfortable assuming that they were all cleanly above this threshold. But also Scott discusses here how our perception of what people with lower IQ scores are like is *heavily* distorted by their correlation with other sorts of disabilities[4], which don't necessarily hold as cleanly as we expect:
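As a quick check on the distributional claims in the previous paragraph, here's a minimal sketch assuming the usual N(100, 15) normalization (scipy is assumed to be available):

```python
# Check the "1.33 SD below the mean" and "around 9%" figures,
# assuming IQ is normalized to mean 100, SD 15.
from scipy.stats import norm

z = (80 - 100) / 15            # = -1.33 standard deviations
share = norm.cdf(z)            # fraction of the population at or below IQ 80
print(f"z = {z:.2f}, share at or below IQ 80 = {share:.1%}")
# prints: z = -1.33, share at or below IQ 80 = 9.1%
```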
Second, your life outcomes aren't usually going to care whether you had a bad day when you took an IQ test. But they care *quite a lot* about how well you think and learn on average. I talked above about the day-by-day *progress* of students at different ability levels: and while there could be overlap in the individual days of students at quite different levels, the cumulative effects painted a quite different picture. Even over the course of a few months, the difference in how much one student learned vs another student could be immense. So when you say you think an IQ 100 person[5] would outperform an IQ 80 person even in spite of a headache, I find that VERY hard to believe. Maybe I'm just more susceptible than most, but moderate-to-severe pain (well short of "physically debilitating") sure does hamper my ability to think clearly. It doesn't make me forget things I've already learned well (I expect I'd get all the "gimme" questions on a test nearly as well), but if I'm in significant pain and find myself needing to reason through a novel problem, the FIRST question I will ask myself is "can this wait until I'm not hurting?" My understanding of IQ tests--and maybe I'm off base here--is that they're supposed to depend as little as practical on specific accumulated knowledge or the practice of specific skills.
[1] Which is to say, all one-on-one tutoring, and those portions of teaching and TAing in which I was answering direct questions or providing in-depth guidance to an individual.
[2] Which, to be fair, I think many SSC readers do. I'm pretty sure *I* currently do. I just haven't always, and have a lot of experience outside it.
[3] With the shape of the normal distribution meaning that only a fairly small minority of even the sub-80 people fall below this threshold.
[4] With the obvious and oft-repeated alternate interpretation being that Lynn was just a garbage researcher who took garbage measurements. But that should still significantly raise our skepticism that measurements at the bottom end of the distribution are as useful or intuitive as we believe in general.
[5] Note: perfectly average! Probably not great at any of the skills being tested. With regard to math particularly, I think I also have a pretty clear view of how good the average person isn't.
When I said, "IQ 80 people (for lack of a better term)" that "better term" I was lacking was a politically correct and/or polite way to say "stupid."
That's why I had the parenthetical there, to signal that this was not really a discussion about the objective validity of IQ tests per se, but rather a discussion about the kind of people who score low on IQ tests and, more importantly, all other tests. I did get around to using the word "stupid" in my final sentence, but clearly by that point, referencing IQ at all was a tremendous distraction.
When I say "stupid" or "low IQ," I'm thinking about a former coworker who deliberately smoked and drank while she was pregnant and gave birth to a child with fetal alcohol syndrome, and who couldn't comprehend the difference between a mortgage interest schedule and compounding interest on a credit card. I'm talking about a different coworker who simply *could not be made to understand* the difference between a health insurance premium, a co-pay, and a deductible, not even with a written guide in front of him and a patient, point-by-point explanation of each term (he ending up saying, "none of this is fair, I'm not paying any of this bullshit anymore, fuck them!"). I'm talking about a third coworker who very literally couldn't problem-solve through *any* minor deviations to his routine, not because he was frozen with anxiety or whatever, but because the ideas for how to solve minor problems simply didn't rise to consciousness. Whether it was a customer asking him a routine question about a policy, or the coffee-maker clogging and overflowing, or deeply cutting into his hand splitting a bagel, he never knew *what to do.* I learned to get between him and customers to answer questions, and to give him very specific instructions in small batches for everything else.
You and I could do our taxes or job on-boarding paperwork or answer essay test hypothetical questions, even with a very bad headache. These three? Not without help, no matter their level of health.
That third person was the first profoundly "low-IQ" (stupid) person I ever got to know very well. This isn't that surprising, to quote myself from the link I provided:
> "It's not their fault; the people writing comments here are almost universally living in highly intelligent "social bubbles" (https://slatestarcodex.com/2017/10/02/different-worlds/). They tend to have highly intelligent family, seek highly intelligent friends, and end up in careers which expose them to highly intelligent peers. They might not *consider* themselves to be highly intelligent - because they tend to socialize with highly intelligent people, they know people who are even smarter than they are - but nevertheless, they're highly intelligent and everyone they know pretty well is highly intelligent, and thus they instinctively model the minds of *everyone* from this perspective.
> "They can't *fully* model what it's like to be truly stupid; unobservant, incapable of dispassionate self-reflection, unable to accurately predict the consequences of a given action, unable to absorb information, unable to update priors. They can't model what it's like to have an entirely different set of motivational priorities based on an *inability* to think, and that's why so many of their suggestions about how to help and/or manage stupid criminals are ultimately unsuccessful."
I wrote those two paragraphs because they exactly described my experience and what I've observed in those like me. I was so deep in the "intelligent world" that it took me six months of working with Third Guy full time before I finally understood that he wasn't willfully discarding good ideas to aggravate me, *he wasn't having them.* It took me another six months to stop resenting his need for constant supervision and protection.
I realize that can sometimes sound implausible to people whose "intelligence worlds" are far more closed than mine is. They either don't really believe in their heart-of-hearts that genuinely stupid people exist, or they only understand it on an abstract, surface level, the way people abstractly understand that foreign cultures are very different from their own but don't actually KNOW that until they travel to and spend time in one.
Hm. Maybe the foreign culture travel metaphor can be useful here. Gotta think more on that.
This whole post deserves a longer response[1], and I intend to write one. But I couldn't let this pass without comment.
"You and I could do our taxes or job on-boarding paperwork or answer essay test hypothetical questions, even with a very bad headache."
Wow. This...this is a sentence for sure. I think this is emblematic of absolutely everything that is wrong with the worldview that is proudly on display here. So let me say this with complete clarity.
No. No I could not. No I could not do my taxes with a very bad headache. I could not do my taxes with a moderate headache. Whether or not I could do my taxes with a mild headache would depend quite a bit on the circumstances. For that matter, so would my ability to do my taxes with no headache at all. And none of that has really any bearing at all on my ability to perform on things that are broadly similar to an IQ test.
I'm sorry that you've had bad experiences with your coworkers and that those have (apparently) made you jaded. But your view of human psychology is grossly and enormously oversimplified, as (I suspect) are your guesses about the social lives of others. There are more things in heaven and Earth, Christina, than are dreamed of in your philosophy, including many, many human minds that *do not* fit the narrow schema that you've tried to define for them.
Doubtless there are some high-IQ-test-scoring people who live either hermit-like existences away from other humans, and know few people in general. Doubtless there are others who have managed to keep themselves siloed away and interact only with people very similar to themselves. But the world is quite a large place, and I tell you quite honestly there are loads and loads of people who BOTH score highly on standardized tests of various stripes AND have some combination of less-privileged backgrounds, breadth-of-experience and intellectual curiosity that ensures that YES THEY DO meet wide cross-sections of humanity. Probably a sound majority of my close friends and family would fall into that category: whatever your bad experiences with other humans, I expect some of them have had worse. Whatever your IQ score, probably some of them have or would score higher. Nor do I have any reason to believe they're especially unique--I run across media suggesting similar combinations of academic aptitude and worldly knowledge quite frequently.
And I tell you frankly, none of them would have a high opinion of what you've written here. Not one.
[1] Oops, I guess that ended up pretty long for what was supposed to be a quick aside. But it still didn't really touch the main points I wanted to make.
I can see that you very obviously haven't had multiple, long-term relationships with the kind of low functioning / barely functioning people that I'm talking about, who can't do stuff LIKE, FOR EXAMPLE, THIS IS NOT A COMPREHENSIVE LIST, NOR DOES IT REFLECT THE LACK OF ABILITIES OF A SINGLE PERSON, American taxes, job on-boarding paperwork, complete an essay in response to a hypothetical question, comprehend their obligations under American healthcare, know all the steps to take when they get a deep cut in their hand at work (wash the wound, put pressure on it, check the wound after a time, understand that if it doesn't stop bleeding, it requires a trip to urgent care / ER, call the supervisor to tell them you're leaving mid-shift and they'll have to find emergency coverage, etc).
I can see that because you said it, gave examples of how some people performed in math tutoring sessions with you (!!!), and because you are also apparently focused on the edge cases of smart people like you, who can write comments like these and tutor others in math but would somehow be incapable of doing your taxes with or without a headache.
(For what it's worth, the three people I wrote about above and a fourth I'm thinking about now would not be interested in reading the exchange we're having, or pretty much any other on ACX. If they were forced to read it (and one would not be able to), they would not be able to pass a reading comprehension test on the discussion with questions like summarizing our respective positions and then quoting sentences we wrote to make supported rational speculations about our respective backgrounds. If it were read aloud to them, they wouldn't be able to follow.)
Please read the Different Worlds essay on SSC and realize you are privileged to be in one, my friend. Those friends and family in your social bubble who would take a dim view of my observation that there are stupid people out there who are so stupid that smart people very literally can't model that stupidity (and thus, don't really believe they really exist) are likewise in a social bubble free of stupid people.
You don't get it.
And that's okay! I didn't either until I started working with them, and I freely admit my social bubble made me so naive that it took me months of observation before I started to understand that there are people out there who are meaningfully not like me - or you, for that matter.
The observation that Taleb is a bombast who frequently gets out over his skis and wildly overstates any point he is trying to make is hardly novel, and not one that really needed to be made at such length. It gets quite boring and repetitive after a while; I ultimately stopped reading perhaps a little over halfway through, as it seemed a waste of time to continue.
As to the object-level question I was discussing, this post seemed to touch on it only lightly and only so far as needed for the author to talk more shit about Taleb. Meanwhile, the degree to which the author apparently *doesn't even notice* the underlying issues that would feed into that point was pretty irksome.
As a more general matter of courtesy, I think you should consider that replying with nothing but a link tends to suggest that the link is highly and directly relevant to the issue being discussed (which is not the case here). If you want to call attention to specific parts of a longer post, you can mention which ones and where to find them in the comment. Likewise if you have your own thoughts or responses, by all means type those as well. But there are many times more things to read on the internet than any one person could hope to digest, so dealing solely in long, tangentially-relevant links is not very respectful of the time of others.
>I remember reading someone--I think it was Nassim Taleb--arguing that most of the spread of human IQ scores was actually caused by pathologies that worsened people's cognitive function, or something like that. And while I suspect he may have overstated the case (shock!), it seems likely to be a significant factor.
That doesn't seem to mostly be the case. While syndromic retardation exists and is distinct from familial retardation (see here on the distinction), most of the normal-range variation doesn't look like pathology:
>Brown et al. (2021) used data from four longitudinal cohort studies with 48,558 participants in the United Kingdom and United States from 1957 to the present to examine the relationship between cognitive ability measured during youth and occupational, educational, health, and social outcomes later in life, and found that most effects followed a linear trend.
Indeed, if anything, the opposite is often the case, with growing returns to intelligence.
There's IQ, and there's all sorts of other qualities that affect, and are affected by, IQ to give you general effectiveness. The easiest example is working memory. Imagine if you had a 100 IQ, but a working memory that could hold 256 concepts as easily as you hold 7±2 today. I think that'd leave a 140 IQ person in the dust.
Now that I write that, it might be just as impossible for that size of working memory to occur in a human brain (diminishing returns again) as it would be for a 200 IQ.
I don’t think you can measure IQ like that. It’s a mapping to a standard deviation. Beyond 160 it kinda breaks down. ChatGPT assures me the smartest person out of 8 billion would max out at 193 but that’s not measurable anyway.
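For what it's worth, that figure is roughly what the normal model spits out, though the tails are surely not trustworthy that far from the mean. A minimal sketch, assuming IQ ~ N(100, 15):

```python
# Expected score of the best of ~8 billion draws from N(100, 15).
# The normal tail almost certainly isn't meaningful this far out;
# this just checks the arithmetic behind the quoted figure.
from scipy.stats import norm

n = 8e9
z_max = norm.ppf(1 - 1/n)      # quantile of the expected maximum of n draws
print(f"z ≈ {z_max:.2f}, IQ ≈ {100 + 15 * z_max:.0f}")
# comes out around z ≈ 6.3, IQ ≈ 195 -- the same ballpark as the quoted 193
```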
To your point, it's interesting that neuron counts for large animals with big brains (elephants and whales) seem to show that their brains are a lot less dense than ours. Meanwhile our brains aren't very neuron-dense compared to, say, a crow. Which seems to imply that there's an architectural limit of some sort which prevents large, neuron-dense brains. My suspicion is that we're out on the bleeding edge of that envelope (we have large brains that are also more neuron-dense than they should be) and that this is part of the reason why our minds are so unstable and prone to weird failure modes.
Birds need to have compact light brains, because they have to be able to fly. Some of them grow parts of their brain in the season when they have to sing, and let them atrophy after.
Elephants probably don't care how big their brain is.
We have space limits and we also need a lot of compute.
But that begs the question as to why our brains aren't denser, because if our brains were as dense as corvid brains then the space issue goes away. My point is that there are case-by-case explanations, but the overall trend seems to be that you can have small, dense brains or big, diffuse ones but not both. And the result seems to be that there's an absolute number of neurons in a given brain that it's hard to exceed. I'm not putting this forward as some sort of general hypothesis, mind, just an observation that seems to gel with the OP's point.
I've heard some people say that chess super-GMs aren't necessarily that high in IQ. Now, they clearly have some sort of outstanding spatial intelligence/ability to understand a sequence of moves. I've never taken chess all that seriously, but for me it's incredibly difficult to visualize something a whole bunch of moves in advance, even if the notation is listed. When I see Hikaru or someone on that level spit out 10 moves in a row, I'm completely astounded; but if I spent a ton of time studying chess, I'd expect my abilities in that area to improve. Also, I know there have been studies of chess players and their ability to memorize the positions of pieces on the board, and they do much better when presented with realistic positions and not just randomly scattered pieces. They're clearly learning how to chunk the pieces into units and memorizing those units. I think chess ability is not quite as correlated with IQ as you might think.
I'm sure there are a fair number of chess prodigies who have demonstrated great accomplishments in other areas, but some of them, just like Nobel winners, have been cranks. Speaking of the Nobel, didn't we just have a Nobel disease discussion in one of the other open threads? Some people, for lack of a better word, are a little bit crazy, no matter how intelligent.
But I think you are somewhat underrating the intelligence of politicians. A lot of them were Rhodes scholars, valedictorians, etc. For example, even people who hate Ted Cruz almost always agree that he's brilliant. I'd wager that most of the 535 reps and senators have an IQ over 120, maybe even 130.
Also, I'd dispute the idea that 120 is "genius" level. That's only about 1.3 standard deviations above the mean. Something like 5% of the population is over 125, and I wouldn't say everyone in the top 5% is a genius.
Now, I think you're right that the effects of IQ mostly level off at a certain point, and other factors play more of a role in success. But I think that at least up to 130, maybe even up to 140 or so, there are still pretty considerable gains to be had. I've taught high school for 15 years, and there's usually a pretty noticeable difference between the kid that gets a 35 on the ACT (99th percentile) and someone down around 30-31 (roughly 95th percentile). I'm willing to bet that on average, the life outcomes of the kids getting a 35 are quite a bit better than the kids getting a 30.
Sure. There are differences. But each 10 points is a *reduced* effect compared to the last 10 points. And that's the critical bit--you can be as super-smart as you want, but if the returns asymptote to 10% better...and especially don't generalize to all areas...
And I'd strongly push back against the idea that politicians are genius level. Smart (as in above 100), probably. But mostly they're just *polished*. And that doesn't actually take much smarts, just practice and preparation.
> IQ, actual IQ, tends to generalize very very well -- to HARD problems. It is notably weak on actual intelligence tests (my friend the genius used "mind reading" on one segment (aka anticipating what the tester would say before she said it), because he was so bad at the actual component being tested).
People are using IQ in a very confusing form here. What’s measured in tests is IQ. What that’s a proxy for is most often called g (i.e. general intelligence).
Ted Cruz graduated from Princeton and Harvard Law where he edited the Harvard Law Review. Base rate analysis suggests that his IQ is very high, whether or not you agree with his politics (and I do not).
It generalizes to *intellectually-accessible* hard problems. Not all hard problems are, in my experience, suitable to solution via thinking really hard. Most interpersonal problems aren't--in fact, thinking too hard can actively be a detriment in those.
Most of us can't multiply 5-digit numbers in our heads. Some of us can do it stepwise, using a learned algorithm. Most LLMs cannot reliably even add 5-digit numbers but can maybe pass the bar exam. A $5 chip can add, subtract, multiply and divide 5-digit numbers in microseconds.
A hard problem is relative to something. High IQ individuals are statistically better at problems that are hard for humans. "Dumb" animals beat us at all kinds of stuff but have very limited capacity to generalize, eg spatial processing to intercept prey doesn't enable geometry or calculus.
Whether returns to intelligence are linear or not is not really relevant to ASI, but it is relevant to takeoff speeds. As long as AGI/ASI is possible and progressing relative to current tech trends, the returns to intelligence aren't relevant over historical timelines.
So, let's lay out a very basic scenario. Assume IQ works from a base 80 and scales logarithmically, so at IQ 80 we've got the equivalent of a dumb person, at IQ 800 we have something equivalent to the smartest person alive, and at IQ 8000 we have a low-level superintelligence. Let's also imagine there are no significant methodological improvements or anything, the IQ just advances in line with Moore's Law, doubling every two years. We basically just keep running the same models with more and more transistors and there's no feedback loops. And tomorrow OpenAI releases the world's dumbest proper AGI, at IQ 80.
So in 2029 the IQ of our dumb AGI is *320* and it's getting around average human level. In summer of 2032, it passes the smartest person ever level, and by September 2039 we have a true superhuman ASI.
And these are really conservative estimates about the returns to intelligence, and the scenario assumes no feedback loop in which millions of AI agents as smart as our best minds speed Moore's Law up by working on better GPU units or something.
As long as AGI/ASI can grow relatively in line with software/computer growth/improvement, that will overpower any low returns to IQ just through compounding improvements in (historically) short timeframes.
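For anyone who wants to poke at that arithmetic, here's a minimal sketch of the toy scenario above (the 2025 start year and the doubling schedule are the scenario's assumptions, not forecasts):

```python
# Toy model: a capability index starting at 80 that doubles every two years,
# checked against the thresholds named in the scenario above.
import math

def years_to_reach(target, start=80.0, doubling_years=2.0):
    # level(t) = start * 2 ** (t / doubling_years), solved for t
    return doubling_years * math.log2(target / start)

for target, label in [(320, "roughly average human, on this made-up scale"),
                      (800, "smartest person alive"),
                      (8000, "low-level superintelligence")]:
    t = years_to_reach(target)
    print(f"{label}: {t:.1f} years -> around {2025 + t:.0f}")
# -> 4.0 years (2029), 6.6 years (~2032), 13.3 years (~2038),
#    within about a year of the dates quoted above.
```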
Except asymptotics are asymptotic. In this model, you can't *ever* get above X% higher *no matter how much effort you throw at it*. Logistic is not logarithmic--in a standard scaled logistic curve you can't get above 1 (arbitrary units), you can just get arbitrarily close.
And I see no evidence anywhere that self-improvement via AI is actually meaningfully possible.
I think the standard Bostrom answer is that since we don't see declining returns to IQ in subhuman intelligences, like bug->rat->cat->monkey->human, why would we assume some ceiling right around human IQ.
Yeah, if you look at the left side of a logistic curve, you see increasing returns. That's the whole point. Going from (conceptual) 10 -> 20 -> 30 shows improved returns. But logistic curves[1] always turn over. And my experience has been that there is already diminishing returns at "human scale" just going from normal -> smart -> really smart.
[1] The number of processes in nature that increase faster than linearly via positive feedback loops is really small, and they are carefully constrained. I see no reason to believe that intelligence is one of those. Most such processes are logistic instead, so the default assumption is that this one is logistic too.
Your personal experience is on the wrong scale because it's on a human scale. The dumbest human is still smarter than 99.999999999999% of all living beings in earth history. The median IQ on a scale of all living entities ever isn't a dumb person, it's like a frog or something. We do not begin to see declining returns to intelligence as we go from frog to lizard to cow. Your observations about declining returns to intelligence are focused on the extreme right end of the distribution.
No one is going to look at human intelligence, which in the evolutionarily trivial period of 100k years has become so dominant that we've become an extinction event for other species on par with the dino meteor, and go "Yes, we have clearly reached diminishing returns in intelligence." If you keep it to human scale though, I will concur, I often see minimal personal returns to IQs over like 130. That's real. But the right scale is all intelligence, not just human intelligence. And a graph of all intelligence is not a logistic curve, it's an exponential curve at exactly human intelligence that collapses right around "about as smart as a human lawyer" and would look absurd if you actually drew it.
I'm skeptical about logistic being the default. I'd certainly cede that when there are finite resources at play, conditional on a fixed level of tooling/technology/effort/etc.
But the purported logistic curve for oil extraction got blown out of the water when we discovered fracking. I'm sure there's still *A* logistic curve under there somewhere, but it looks nothing like the one we imagined.
Same spirit, there might be resource limits on intelligence, and probably are on wetware like brains. But I wouldn't take a priori that those limits are the same for silicon.
In reality there’s a limit in compute that is being reached as we speak. Therefore gains have to come from elsewhere.
Transistors are nearly as small as they can get, training frontier models already consumes gigawatt-hours, and doubling that every two years would demand something like the output of a national grid. The cost of new fabs runs into the tens of billions, so the money is as much of a constraint as the physics or the power. Moore’s law was steady while it lasted; what we face now are plateaus, where real progress depends on smarter algorithms and efficiency rather than brute-force silicon or electricity.
(For context, GPT-4-class models are thought to have used on the order of several gigawatt-hours to train. If that demand doubles every couple of years, by the early 2030s you’d be talking terawatt-hours for a single training run — the kind of consumption that starts to match or exceed the annual electricity usage of a small country.)
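To make the compounding concrete, here's a back-of-envelope sketch; the ~50 GWh starting point for a GPT-4-class run is a circulated estimate, not a confirmed figure:

```python
# Compound a ~50 GWh training run (a commonly circulated GPT-4-class
# estimate, not a confirmed number) at a doubling every two years.
gwh, year = 50.0, 2023
while gwh < 1000:              # 1000 GWh = 1 TWh
    year += 2
    gwh *= 2
print(f"Crosses 1 TWh per training run around {year} (~{gwh:.0f} GWh)")
# -> around 2033 at this rate; for scale, a small country like Malta
#    uses roughly 3 TWh of electricity per year.
```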
WoolyAI’s answer seems to better address your question than mine. If you feel like the standard Bostrom answer fails because we’ve reached some asymptote, I can’t prove you out of it. But I can ask you to imagine a similar conversation between chimps about whether or not there can be an intelligence greater than theirs. When the idealist notes that chimps are smarter than bugs, the cynic might say “if you look at the left side of a logistic curve, you will see increasing returns”. They might observe that the smartest chimp can make sentences only slightly better than the average chimp. But they’d be wrong about a logistic pattern to intelligence. Or at least, the logistic curve that applies on the level of species is very different than the one that applies to a single species.
It still seems hard to see why, as a fact of nature, the curve for intelligence would happen to flatten out right at human level. It certainly isn’t hard to *imagine* how useful it would be to be even as smart as ten von Neumanns working together in a room, and no particular reason to imagine even *that* is the upper limit.
If you perceive a flattening in human intelligence, it might be because we have evolved for group behavior, and isolated far-right-tail individuals are hampered by the paucity of peers to collaborate with. Or that intelligence is a recently-evolved attribute of humans that is still associated with various other cruft in the genome (like lack of charisma?) that interferes with what the individual can accomplish. Or that our monkey-nature makes sure that when a peg gets too high it’s pounded down.
It might be that AIs created by training on human text/behaviors can’t ever exceed previously displayed human abilities. But again, it’s hard to imagine what kind of law of nature would guarantee that. A collection of only slightly above human-average AIs might be free of Dunbar’s limit, for instance, and able to cooperate directly with more peers, accomplishing more than a similar collection of human AI researchers could.
All of these answers seem to be arguments from incredulity. "Hard to imagine" isn't a convincing argument.
And the idea that "all life" is the right scale just seems to be assertion, rather than argument. It assumes that intelligence can be meaningfully generalized as a single scale running over very disparate creatures, which is a massive smuggled assumption/stolen base.
120 IQ is squarely in the range of midwits. The sort of people who don't use Quantum Effects in their protein folding calculations. For twenty years! That's twenty years of research down the drain, and this is not exactly a sign of "actually high intelligence." Many years of stupidity, of PhDs showing a complete and utter lack of understanding that "maybe we're doing something wrong"?
Chess is another midwit sport. They do not have very high IQ, as they are subject to trolling, in general, by people less good at chess but smarter than they are.
There are absolutely tons of highly charismatic "super smart" people. You don't know about them because they are either comedians (you can't tell me the good comedian isn't a shmart guy, because I know he is), or too busy doing ten different jobs under ten different pennames.
Assume you lack the capability for metaintelligence. People more than about 10 iq points higher than you, simply look "absurdly lucky."
Again, by definition, mid intelligence would be around 100. I’m not certain you understand the nature of the IQ scale; it’s a relative scale.
I think you are making up your own definitions here. It’s not that common for people at 1.5 SD above the median to actually believe any of that.
That said I do believe that there’s a step change for the top level of intelligence - which isn’t measured in IQ which assumes a bell shaped curve, but that’s hard to measure. By their fruits shall ye know them.
It’s not that radical to say we are all stupider than von Neumann.
Well indeed trusting the science is not itself a scientific idea, of course.
I’m still unsure who you think are the brightest people on the planet, except for a few mediocre comedians and your friend who was joking about winning the Nobel prize for World of Warcraft.
Can you give an example of someone who is both highly charismatic and *by normal measures* super smart? Intelligent (ie over 100 IQ), sure. I'll buy that. But *genius level* (however you define that)?
And I think your causality is backward--you'd need to show that *every* (or at least *most*) geniuses are also super-charismatic, not that *some* are. What I'm questioning is that IQ *causally predicts* charisma. I believe that the two are actually uncorrelated above some relatively low threshold. Which would totally be fine with "there exist super smart, super charismatic people". But wouldn't be fine with "if AI gets super smart, it will therefore also be super charismatic".
Are you *sure* that those people are really smart? I see no evidence of IQ tests for them. I think you're letting yourself be biased by "is really good with words === is super smart". And that's exactly the thing under question here, so assuming that "good with words" === "super smart" is rather circular.
I'd be fully willing to accept that they're *above normal*. But genius level? That I doubt.
And you still haven't proven the real problem here, which is the reverse of this line of argumentation. That *all* high-intelligence people are *also, intrinsically* charismatic. And that I'm 100% sure is just flat false. Because I know plenty of people who have scored really high and done really great intellectual feats who *suck* at people-ing.
That feels like a bit of argument by definition. Sure, if you define "really intelligent" as "also really good at everything else" (rather than the normal methods of measuring/defining that), the problem resolves itself circularly. But that's not really a meaningful discussion.
I will also note that *thought experiments* are really easy to do even as a mid-wit. Actually proving it and doing the work (ie building the framework for general/special relativity) takes the smarts. So no, I'd not say that just doing that is winning a Nobel prize, even if a Nobel prize results. Or is evidence of being super smart in and of itself.
I think one crux is that we're used to assuming intelligence is more or less fixed. But in the context of AI, a mind is a piece of technology that can continue to be improved, and the rate of improvement can increase if you're throwing more intelligence at the problem.
I’ll try. The steelman would be that genuinely, along some dimensions, logistic increases in intelligence can lead to exponential increases in outcome. It just depends on what you measure.
I’ll analogize to sports. With respect to tennis aptitude, there are strong diminishing returns in number of points. Federer, the world’s best, only won 52% of points he played. And with respect to speed, the diminishing returns are obvious: his shots aren’t much more than twice as fast as mine are. But with regards to outcome/money/prestige, that tiny edge translates to world championship, millions of dollars, and ultimate prestige.
If an AI with a “small” boost in IQ knows enough to make a virus slightly better than Covid, why does it matter that improvements are logistic when millions are dead anyway?
Those sorts of sports outcomes are *binary* and *rivalrous*. Only one person can be the best, and the label itself, regardless of the actual performance, is what matters. Most things aren't that way, and that's not really responsive to what I'm asking here.
I'm not arguing that AI couldn't be *devastating*. But people manage to do that just fine by themselves. As does *unthinking nature* (cf covid itself). Intelligence is not load bearing there.
What I *am* questioning is the idea that you can get superhuman intelligence in all aspects and have it be *categorically* different than what we have now, to the point that we won't understand it. I suspect that it will, instead, be *smart*, but *understandably so*. And suffer from all the foibles attendant to any smart human (or at least the analogues of them). Including having to specialize.
Originally, I wasn't going to comment on this thread, since you asked for dissenting opinions, whereas I made pretty much the same point as you a few weeks ago [0]. (Namely, that the ROI from intelligence has diminishing returns, depending on the environment.) (It's like a lock & key. The specificity of the key depends on the complexity of the lock. And the *value* of "unlocking the lock" depends on what the lock is guarding.)
Except, I do actually sorta disagree with:
> Including having to specialize.
Well, maybe not. The current advantage of LLM's is that they're able to read the entirety of the internet. I'm reminded of Dmitri's post about how LLM's should be regarded as SuperHistory instead of SuperIntelligence. And there've been discussions about how "data is the new oil" because LLM's might one day run out of internet. (I can't remember links, will look though.) [EDIT: who cares because there's a million hits on DDG.]
> To wit, I think most people have a pretty terrible understanding of what IQ means, even here. For one thing, it's an ordinal ranking. The notion of something like "linear returns" is a bit sketchy.
Thank you for writing this, so I didn't have to write the same thing. At best it is a metaphor, like "look at the differences at human intellectual performance between IQ 100 and 120, or 120 and 140, or 140 and 160, and kinda try to approximate this difference beyond the human range".
It is also important that *potential* ability is not the same as *actual* ability. Actual ability requires time to study and practice... and even the smartest human on the planet only has 24 hours a day. Perhaps they could become the world's greatest chess player, or the world's greatest politician, or the world's greatest expert on quantum physics, but they definitely do not have time to do all of that (plus many other things that are also potentially within their reach).
But this limitation does not necessarily apply to artificial intelligences, which can simply do things faster and remember better and maybe scale up if necessary. Consider that the LLMs are already experts on *everything*. Sure, they make mistakes and hallucinate etc. But still, they can talk, at a certain level of quality, about million different topics, most of which I know nothing about.
Even this...
> In fact, I've often seen *regressions*--people who are really smart often struggle to talk meaningfully to "normal" people and fail to connect to how they see things. Which suggests to me that IQ starts losing a lot of its punch the higher you go. And may actually correlate with *reduced* performance in other aspects of life.
...is fundamentally a problem of time and attention. The people who struggle to talk meaningfully to normies probably have the problem because they don't practice this skill enough. Psychology or rhetoric or whatever is simply yet another thing to study that competes for your time with the other things you study.
> Even *within* their broader specialty (e.g. physics), a genius at quantum mechanics isn't better than most smart grad students at, say, general relativity--the skillsets and knowledge base are too different. And they're not good at all at, say, organic chemistry.
Again, specialization requires time, you can't specialize on everything. The genius at quantum mechanics probably has enough intelligence to master organic chemistry, but doesn't have the time.
Yes, comparing hypothetical things like "the smartest one among 10^50 humans" and "the smartest one among 10^60 humans" is not helpful.
I can't even guess whether the difference would be very *small* (both of them are already using the biological potential of the human brain to 99.999999...%, and the extra 9s have diminishing returns) or very *large* (both of them are some freakish mutants with magical powers, and the latter has even stronger magic than the former).
And if human brains have a biological limit of how smart they can get, then the values for "smarter than that" are undefined on the IQ scale, and we would need a new way to express it for the superhuman AIs. (Which ironically reminds me of how the definition of IQ had to be changed from "mental age divided by physical age" once we considered humans that are smarter than the average X years old human for any value of X. Now this would be kinda similar, for AIs that are smarter than an X-percentile human for any value of X.)
For a biological human, once you are sufficiently smart, *time* is the bottleneck. (And other things, like conscientiousness or wealth, but they kinda indirectly influence how much of your time you *can* and *will* actually spend studying the things.)
I like fiction that aims at, and mostly succeeds, in capturing a time, a place and a culture.
John William De Forest described the "Great American Novel" as a book that would capture the "tableau" of American society, "paint the American soul" and capture "the ordinary emotions and manners of American existence".
Tom Wolfe's The Bonfire of the Vanities (1980s) might be the best example, but I'm not interested only in American fiction.
Our time (roughly 2008 global financial crisis to now, through Trump/Brexit; wokeism; the pervasiveness of the right/left dichotomy; smartphones / social media / techno-globalism; the pandemic and pandemic response) deserves a literature.
I don't know of any post-2008 period fiction, unless you want to count Stephenson's _Reamde_, which spans from the American Northwest to the coast of China in a world with a World of Warcraft-killing MMO.
Generally, I can recommend Amor Towles. Listened to _The Lincoln Highway_ on a road trip (fitting!); it paints a vivid picture of the US circa 1954. He has two other novels and a series of short stories (_Table for Two_), all appear to be period fiction, all well-received.
I see nobody has mentioned "An Absolutely Remarkable Thing," which was described (in 2018) as "the best book I've read about what 'now' feels like." I'm partway through and quite enjoying it.
Depending on your preferences and standards, you might really enjoy Guy Gavriel Kay.
Kay is a fantasy author, but he's a very particular sort of fantasy author. His MO is picking a particular time and place in history, researching it very thoroughly, and then creating a fantasy setting that very strongly mirrors its culture, geography and politics, while still leaving him room to tell whatever story he wants to tell.
For example, The Lions of Al-Rassan (one of my favorite books ever) has a setting very closely based on the Iberian Peninsula during the 11th century. Most of the characters are original, but one is significantly based on El Cid. The story and setting are deeply involved with the interplay between three main religious groups, which have fictional names and beliefs, but are clearly identifiable in the context of the setting as stand-ins for Christianity, Islam and Judaism (with one main character belonging to each). While it's obviously not a guide to the historical *events* of that time and place, it would take an author of rare talent to write historical fiction that was anywhere near as powerful at capturing the emotional experience of living through those sorts of historical events.
His Wikipedia article lists the inspirations for his various novels (note his first works, The Fionavar Tapestry, don't follow this pattern):
I can’t tell if you’re just looking for 2008-present American fiction, but if not:
Gary Jennings’ historical fictions Aztec and The Journeyer are mind-bogglingly good. The amount of travel/research the author put into creating these novels is equally mind-boggling. Highly recommended.
Sorry for my lack of clarity. I’m looking for post-2008 fiction that captures the culture, but not necessarily American. Thanks for the recommendation. I haven’t read much historical fiction.
Patricia Lockwood (No One Is Talking About This) is a contender specifically if you're focusing on the feat of capturing zeitgeist and not necessarily on other qualities that make a novel 'great'. Although in my subjective, ill-educated opinion her prose is also very, very good.
John Grisham seems to have set all his novels (well, the ones I read) in the TN-MS-AL area, late 20th century (present day when they were published). They felt reasonably immersive, assuming one enjoys trial lawyer drama.
I guess "The World According To Garp" hits several of those for... 1978? The Ellen Jamesians certainly capture the insanity of radical activism, and it features the line "there's no sex like trans-sex", so like... that's a recurring topic.
It sure does for 1978! Keen to find something for post-2008 world. Hasn’t been many I think. Someone elsewhere suggested The Mandibles and Mania by Lionel Shriver (from 2016 and 2024 respectively).
Edit: I’m sure it’s true (and always will be) that there’s no sex like trans sex.
I would recommend The Name of the Rose, by Umberto Eco. It's a murder mystery set in a 14th century monastery, and succeeds to a great degree at presenting an immersive and detailed view of a very foreign (to us) world, and how the people who inhabit it think.
Also one of the main characters is a proto-rationalist who applies skepticism and empiricism to try to solve the mystery, so it may be of special interest to readers of this blog.
Most of Eco's novels do something like this -- Baudolino for the 12th century, The Island of the Day Before for the 17th, and The Prague Cemetery for the 19th. Part of their effectiveness, I think, is in mixing the themes and conventions of that era's storytelling in with the historically accurate details -- respectively, bestiaries and fantastic travel tales; cosmology and picaresque novels; conspiracy theories and feuilletons.
I would recommend reading Narcissus and Goldmund, by Hermann Hesse. It's set in Medieval Germany, and is kind of about the relationship between this Catholic monk (Narcissus) and a wandering, free-spirit character (Goldmund) as he searches for the meaning of life. It's mostly about this Goldmund character and follows him from a young age to late adulthood.
It's not modern or American (published in Germany in 1930), but it's the first book that came to mind when I read "aims at, and mostly succeeds, in capturing a time, a place and a culture".
All of Hesse's other major novels are also fantastic, "The Glass Bead Game" being the best. But that one is set in a speculative future and doesn't capture anything that really exists. It does feature three short stories at the end of it, though, that the main character "wrote" as he imagined himself living in different countries during different periods of time in the past.
Haven’t read this. Did read Steppenwolf recently and will revisit that again and again. It feels relevant to this point in time (even though it was written ~100 years ago).
Steppenwolf is also a favorite! Honestly I haven't read a book by him that didn't deeply affect me. His personal journey and thinking are somehow all very relevant today.
I've been reading a different non-fiction German writer whose stuff is from the 1930-40s, and his stuff directly describes a lot of what goes on in America today. Now that I think about it, some early Nietzsche stuff that I've read does as well, and he was even earlier... Is America just Germany 2?
Also sorry, I misunderstood your request. I thought the first sentence was what you were looking for and the last one was an afterthought rather than further specification.
I really felt like I got my money's worth. I wouldn't spend 3 hours watching it at home, but for me spending 3 hours watching it in a theater was satisfying.
I don't think there is a bigger (even political) message. It sort of reminded me of a slapstick superhero movie or something, but with actors I love watching.
I thought it was decent! After the first 30 minutes, I wasn't expecting much, but it picked up the pace after the time skip. I agree that the characters were cartoon/stock characters, but funnily enough I found that to be one of the movie's strengths. Different preferences, I guess?
The car chase scene was especially enjoyable, a real 'thriller' moment. Other than that, it was also more political than I would like, and wish they would have left the ending more open.
LOL I thought the car chase was interminable. I guess that's what makes horse races.
But, here is a question: Was evil cartoon guy actually chasing stock teenage girl character, or simply making his getaway and happened to be on the same road? He had no reason to think that she had escaped, after all.
>it was also more political than I would like,
I don't mind a film being political. The problem is that the politics was facile. There are real issues that could have been explored, re political violence or re immigration or re law enforcement, but nothing was.
>and wish they would have left the ending more open.
Yeah, and "bad guy gets comeuppance / father and daughter draw closer / daughter carries on parents' political advocacy" was not exactly an unexpected ending.
I agree that the chase doesn't make sense and the politics was facile. I watched it as I would watch a Tarantino-movie: over the top characters do over the top things, but it is still kind of entertaining. Might not be what the producers intended, or what the people you have heard praising the movie are saying.
I have a friend who loved it, because he thought it was a spoof. I certainly don't think that was the intent. I would love to see an Armando Iannucci version of the film.
>I watched it as I would watch a Tarantino-movie: over the top characters do over the top things
I can't say I am a huge fan of much of Tarantino's recent work, other than Once Upon A Time in Hollywood. He has really wasted his talent IMHO. And that certainly wasn't what I expected from a Paul Thomas Anderson film.
ACX 2024 grantee here with an update on the EEG entrainment project.
The study “Learning at your brain’s rhythm: individualized entrainment boosts learning for perceptual decisions” claims that entrainment (flashing a bright white light) at a person's individual peak alpha frequency (IAF) helps them learn to distinguish two types of patterns faster. I'm replicating this study.
Three weeks ago I provided an update on an Open Thread (https://www.astralcodexten.com/p/open-thread-398). I now have a video of the demo of the coming replication from the latest ACX meetup in London: https://www.youtube.com/watch?v=pP5dO97l9Bo. In the video you can actually see people getting their brains entrained and solving the study tasks while wearing an EEG headset. You can also see some charts pointing towards the effect being actually real as well as learn more about what the study did and why I am very optimistic about it actually replicating.
I’m looking for 10 in-person participants in London willing to dedicate 4 hours of their time for the replication. If you want to volunteer — please sign up using this form https://forms.gle/X37zyTV3KhbSb3Ze9, your help will be greatly appreciated. From the previous update I have 11 sign ups but only half of the people actually confirmed their participation over email, so I'm still looking for more participants.
I’m also looking for 15-20 remote participants with their own EEG hardware willing to run the replication on themselves and provide me with their results. To volunteer — sign up using this form https://forms.gle/G971tuMUfGqywEG38. For making this effect into a production system that helps people learn, it would be amazing to first see whether the effect replicates on a variety of different hardware.
I wrote about self compassion, perfectionism, current trends in the tech industry, and how to cope with the diminishing prestige of the software engineering profession. "self compassion and the disposable engineer" https://dlants.me/self-compassion.html
I enjoyed your essay a lot. On the topic of "Hard Work": Graham mentioned that people with both talent and the drive are rare. I think people underestimate the probabilistic forces at play, in the sense that if Bill Gates didn't have both talent and drive, he'd be somewhere else in life. Like you identified, our culture tries to glorify these people and many wish to emulate them. So there is a certain tension between what is already there in people and what people want to create within themselves.
I think it's important to be able to strike a balance between the two poles: on one hand, striving to do your best can be good, but on the other hand, we all need to accept our limits with compassion. Trying to be something one is not will lead to unhappiness, while only accepting what is already there results in no progress ever being made.
I struggled a lot with this essay because there was a lot I wanted to say, and I ended up cutting many things to focus on the emotional self compassion and disposability message.
A future essay will cover the question of hard work and striving. I think we often use physical language to talk about mental exertion - gritting our teeth, putting our head down, etc. but the picture around mental exertion and exerting your will is really murky.
There's a lot of things that have to do with executive function, and some that have to do with motivation. In all of the literature I've read, it's not clear whether there is such a thing as a conscious exertion that improves your performance at the given task. In many cases, trying harder actually makes your performance worse.
It seems there are some cognitive processes that can be helpful around motivation and behavior change (as I linked in the stronger by science article), but little of that ends up looking like the common image "working hard".
Personally I've adopted a philosophy that can be summarized as "gentle re-engagement".
So yeah that will be a future essay, I think it's interesting stuff.
Personally I like the taoist philosophy of wei wu-wei (doing not-doing). It's similar to what you call gentle re-engagement, but leaves the whole thought of re-engagement out ;)
I'm probably not practicing it correctly (after all, I'm not a taoist sage), but in my understanding, it's mostly about doing what arises in the moment. This is very unhelpful advice for most people, since what people typically perceive as arising are things they consider vices, but if you are able to be gentle enough with yourself (and have chewed the whole eastern philosophy pill long enough) what comes up... turns out to be useful things. Because if you think about doing your taxes, you'll just start doing your taxes. If you think about watching a movie, you just watch a movie. If you keep thinking about things that you currently can't change or that are not appropriate, you're simply not in the moment enough and too distracted by the illusions of life for wu-wei...
I enjoyed the essay, thank you. I always felt my own struggle with perfectionism, as a software developer, comes because the job does demand a kind of perfection: you always have to be on guard against introducing bugs into the code. You cannot be an optimist as a software developer; that just means things will break.
I never felt like I struggled with feeling like I have worth because of my job, because on my end I treat companies as disposable myself. I'm in this job because it's fun and it pays well; I don't especially care about what my current employer thinks of me. Then again, maybe that's privilege: I'm only doing full stack web dev, so not a high pressure job, and I may be underestimating how difficult it will be to get another job if my company goes under or I somehow get fired (which I feel is unlikely).
I think feelings of low self-worth for me are more driven by a feeling that I don't know how to get on with people, but I think this essay touches on that too, with the general theme of self-compassion.
My employer introduced a High Deductible Health Plan option this year, but in direct contravention of everything I thought I knew about HDHPs they priced the premium higher than the HMO premium. Specifically, for individuals the HDHP premium is $20 higher per month and the only "perk" is a $250/yr HSA seed. So if you opt for the HDHP you net out $10 ahead for the entire year, at the cost of having to pay ~1000% more out of pocket if you ever consume any healthcare.
This seems like such an obviously horrible deal that I can't imagine anyone taking it, but if that's the case why did they bother introducing it in the first place? The open enrollment email alludes to vague "tax benefits" associated with having a HSA, but is there any possibility that those are good enough to justify opting for it?
As a young man in perfect health I'd love to have a real HDHP option that cost less in premiums than the HMO, so I'm trying to understand why they would have set things up this way. The email said that they had "heard employee requests for a HDHP" but introducing one structured like this seems like it's worse than just not introducing one at all and continuing to only offer us HMO and PPO. It feels like if the cafeteria said "we heard your requests for ahi tuna, so we've added some cat food to the menu". Does anyone have any insight on a possible explanation other than this being an institutional "fuck you" to the HDHP-wanters?
That's pretty aggressive. The HSA has tax benefits that make it worth it for a person who doesn't consume much health care. But those benefits are given to you by the IRS, not your employer. So by charging more, your employer is essentially trying to take from you a portion of the benefits given to you by the IRS. Usually an employer leans into tax-advantaged ways to compensate employees. Here they are doing the opposite. Shrug
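To put rough numbers on the size of that IRS-given benefit (the 24% marginal federal rate and payroll funding are my assumptions, not anything from this thread):

```python
hsa_max = 4300                     # individual limit cited elsewhere in the thread
income_tax_saved = hsa_max * 0.24  # ~$1,032 at an assumed 24% marginal rate
fica_saved = hsa_max * 0.0765      # ~$329; applies only if contributed via payroll
extra_plan_cost = 20 * 12 - 250    # higher premium minus the $250 seed = -$10
print(income_tax_saved + fica_saved, "of tax saved vs", extra_plan_cost, "net extra cost")
```

So for someone maxing the HSA, the tax break dwarfs the premium difference, which is presumably who the plan is aimed at.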
Never attribute to malice what can be attributed to stupidity. Probably the prices are just what a computer spit out when some insurance agent plugged numbers in. Maybe there are sound actuarial reasons for those numbers; I don't know enough about the industry to comment on that. But in the absence of other evidence, I wouldn't assume evil motives on the part of the employer.
Wow, I've never heard of that before. Usually, companies *want* you to take the HDHP, hence pricing it the lowest and making an extra HSA contribution.
The email also mentioned something about "balancing the risk pool". I work at a university so there's a bimodal distribution of old-ass faculty members and young staff members on the insurance, I guess they didn't want all of us staff fleeing to the HDHP and leaving the HMO pool full of sickly Boomers?
An HSA plan is a huge tax loophole, but its main advantage is as an investment, not as a way to reduce healthcare costs month to month or year to year. If used correctly, contributions, capital gains/interest, and distributions are all completely tax free. Other tax-advantaged plans like a 401(k) or a Roth IRA have one or more of these tax benefits, but none have all 3. So basically you get to sock money away into the stock market and never pay any taxes on it at all, provided you comply with the rules of the plan.
The medical expenses you can spend the money on are pretty broad, dental and vision, stuff like chiropractic, acupuncture, prescription OR over the counter medicine, even apparently a gym membership or diet expenses if you can get a letter from a doctor that they are medically necessary? Feminine hygiene products, diapers, first aid kits, assisted living, etc. It's a long list.
But the real tax advantage comes because you don't have to take the reimbursement when you pay for those expenses, and you shouldn't. You leave all of your contributions in the plan, invested, growing tax free. And you save all of your receipts for eligible expenses and don't take distributions until retirement, as you need the money, using up your "banked" receipts over the years. If you don't end up with enough medical expenses accumulated to withdraw the full balance, then distributions just become taxable at your current retired tax rate (assuming you are over 65). So it basically turns into a normal 401(k) plan at that point.
If you make the max contribution of $4,300/year for 40 years and invest it at 5% returns, that's a total cash investment of $172K and gains of $347K, for a total HSA balance of $519K at retirement. If you get 7% returns, you end with $686K in gains for a total balance of $858K. Hopefully you don't have that many medical expenses to reimburse, but again, it just becomes a normal 401(k) plan if you don't. So you can look at it as part of a primary retirement planning strategy that gets you completely tax-free medical care for life if you play it right.
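For anyone who wants to check those figures, here is the arithmetic as a quick sketch (contributions at year-end, annual compounding, and a constant $4,300 limit are all simplifying assumptions; the real limit is inflation-adjusted):

```python
def hsa_balance(annual=4300, years=40, rate=0.05):
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + annual  # grow, then contribute at year-end
    return balance

contributed = 4300 * 40  # $172K of total cash in
for rate in (0.05, 0.07):
    total = hsa_balance(rate=rate)
    print(f"{rate:.0%}: balance ~${total:,.0f}, gains ~${total - contributed:,.0f}")
# 5%: balance ~$519,439, gains ~$347,439
# 7%: balance ~$858,431, gains ~$686,431
```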
My HSA only has two investment choices: money market fund A, and money market fund B, which sort of limits the growth benefits. The reason for this choice is obvious if we're pretending it's for health expenses, not retirement. I might need it at any point.
I do count it as "cash" in my overall portfolio, which lets me be a little more aggressive in my other vehicles.
On the other hand, while my plan has a web form to upload receipts, there is no need to make my withdrawals correspond to them, or even be limited by them. I'm over 50 so maybe it's just assumed I can have whatever medical bills are necessary.
You can get a different HSA. The HSA account is between you and the IRS. You're not required to use the one that the plan suggests (other than you'll prob. have to have it for any money the company kicks in). Fidelity offers HSA plans that let you invest in whatever, with no fees (prob. as a bit of a loss-leader; the kind of person who sets up an independent HSA is prob. the kind of person Fidelity would like to have a relationship with).
It makes sense, but only in conjunction with a Health Savings Account. An individual can contribute $4,300 each year, pretax, to their HSA, which can be invested and grow tax free; any distributions for health care are also untaxed.
So, it's a good deal if you have money to put away in an HSA.
Alas, I do not, but thanks to all the explanations in this thread I do now understand why it might be desirable for other employees who have a high enough salary to minmax the HSA. Thanks everyone!
If your in-network provider list sucks (which it probably does with an HMO) and you have specific doctors you want to see that are out of network, then wanting the HSA might make sense. Probably doubly true if you have specific mental health providers you want to see.
The HSA (Health Savings Account) is an especially good retirement vehicle. And yes, you are right to be surprised that this is why your health plan option costs more; we have a really strange system.
You can only put up to a maximum amount into your HSA every year, and the interest earned is tax free, similar to a Roth IRA. But the money you put into it is pre-tax (like a 401k), and any money you withdraw for health expenses is tax free. Money you withdraw later for non-health expenses is taxed as income.
The people who were asking for the HDHP almost certainly wanted it for the HSA access (which by law is only available to those with an HDHP).
You should get a different HSA account provider. The HSA is between you and the IRS; you don't have to use the one the company provides/recommends (other than for any money they kick in). You can set up an independent one (Fidelity, for one, offers such accounts with no fees) and fund it independently rather than through payroll deductions, and it will all net out the same on your tax return.
You just put money in. You do have to have the right kind of health insurance plan (high deductible, HSA compatible) and there are limits, etc. But the tax-saving part all gets calculated on your next tax return (which is actually what's happening with the company-offered plan under the hood). Much like an IRA vs. a 401k. You will get reporting from the HSA administrator on your contributions and distributions; you will report your contributions (the company plan does this on the W-2, but that's just a convenience rather than a difference in treatment), and this reduces your taxable income and thus your total tax bill. If you didn't otherwise adjust your withholding (W-4) or estimated tax you'll get a refund, but you can adjust those, if you like, to pay less tax through the year rather than get a refund at the end.
“This seems like such an obviously horrible deal that I can't imagine anyone taking it, but if that's the case why did they bother introducing it in the first place? The open enrollment email alludes to vague "tax benefits" associated with having a HSA, but is there any possibility that those are good enough to justify opting for it?”
I assume the HDHP still works out to your advantage. The tax benefit is that the money is tax free; the HSA that comes with the HDHP offers investment vehicles that you allocate your money to, you park it there for decades while it grows, and you are hopefully not consuming healthcare.
In addition, if you ever have unreimbursed health care expenses, you can keep your receipts, and potentially decades later you can be reimbursed for these expenses, which ends up being tax-free income to you.
Think of it as buying access to another retirement vehicle at the same time you have a health plan.
The "AI as normal technology" series of posts challenges the idea of intelligence being a really large spectrum. The standard metaphor is that the "village idiot - Einstein" gap is a tiny segment in the spectrum of intelligence, akin to visible light compared to the entire radio-magnetic spectrum. They object, claiming that a lot of important domains naturally limit the power of intelligence.
This led me to think about the limits of intelligence in chess. The gap between an amateur and a grandmaster is similar to the gap between a grandmaster and a modern engine. A GM would beat an amateur at a ~100% rate, and an engine would similarly wipe the floor with the GM. Naively, one would assume the chain goes much further.
But is it true? I claim (and would appreciate feedback here) that the chain basically ends there, assuming chess is a theoretical draw. There is likely no super-engine that would decisively beat Stockfish 17. Moreover, it is likely that even the game-theoretical optimal oracle would still draw against SF17 most of the time.
The key idea is that chess is a game of understanding and calculation, but both lose their usefulness beyond a certain level. One needs to understand which positions are more likely to win, based on general features of the position (material, piece activity, long-term weaknesses, etc.). But for any pure (fast) "understand" function, there exist very similar positions that differ only concretely; so one also needs to calculate to distinguish between them. In other words, there is only so much to abstractly "understand" about any given position; hence calculation is used to complement understanding. But calculation also fizzles out in usefulness after a certain depth. Indeed, for calculation to yield new insight at, say, depth 20 rather than 19, the position must remain "forcefully sharp" for at least one player all the way to that depth. Such cases exist, but are rare, and if the smart (with good understanding) player wants to avoid them, they mostly will.
In other words, superior chess understanding and superior calculation will get you only so far. To secure a draw, one needs to understand and calculate just "well enough." I think modern engines have likely reached this level. They mostly draw against each other. The famous AlphaZero vs. Stockfish 8 result appears to prove the point rather than to contradict it (they mostly drew within a 50–100 Elo performance margin).
So the game of chess appears to have some "irreducible complexity" baked in; this makes intelligence usefulness diminish, so even a relatively dumb intelligence can be good enough to secure a draw.
This kind of idea is pretty common in video games, it's called a "skill ceiling", the point at which more skill can't translate into better results in a game. Players usually like games with a high skill ceiling so they can keep getting better at the game if they want to, and will sometimes mod a game specifically to increase the difficulty and accomplish this. And when they can't compete on skill, they compete on speed instead.
3 points that don't really answer your question but may be useful:
1. Asking this about chess is somewhat inconvenient because we know that if there is a limit, it is beyond human ability, so it's hard for us to visualize what the limit would be and our intuitions may not be reliable. You could simplify the question by asking this about simpler deterministic games, like tic-tac-toe; we definitely know there's an upper limit there. Given that we know there's a class of games where there is an upper limit, is it theoretically interesting whether or not chess falls into that category? Or is that just a piece of trivia about chess, and the point is that the category does exist? (See the toy tic-tac-toe solver sketched after point 3.)
2. My extremely limited knowledge (please correct me) of the history of chess is that whenever the state of play has evolved to the point that a winning strategy was found and play has stagnated, people just introduced rule changes or additions, or new game modes like timed moves, to open up the strategic space and make innovation and variety possible again. I would assume the same thing could be done with AGIs playing hyper-chess; if the humans invented something too simplistic to be fun, just patch it until it's fun again. I feel like this reveals a problem with the premise of the question; yes, anyone can make a toy example where there aren't enough levers and moving parts for intelligence to matter past a certain point, but anyone who is too intelligent for that toy can discard it and do something else instead. The idea that, if a toy model can be too simple that way, *reality itself* can also be too simple that way, is a big jump, especially when our toy examples can always be expanded to allow for more intelligence.
3. Which brings up another point, which is that one way to win a game of chess is to shoot your opponent. Or just restrain them until their timer runs out, or do hacking/neurolinguistic programming on them to force them to make errors, or lobby/manipulate the judges to hold the game at a time and place where your opponent will be suffering from jet lag, or etc. You can say 'but we set the rules of chess to mean that doing any of that is illegal and counts as a loss', but then the win state is either to hack your opponent into doing those things to get a loss, or hack the person setting the rules to change them. This is again attacking the premise of the question: yes, you can create a local context that is so simple that intelligence can't help you, but much of the point of intelligence is jumping out of contexts. Sure, this lion has bigger teeth and sharper claws than me, I can't win the fight in that context; but hey, what if I bring in the context of obsidian and sticks? Now I have this cool atlatl and the lion is dead of a spear through its head before it even scents me on the air. To the extent that the argument is AI won't be scary because its ability to affect systems through more intelligence is limited by the complexity of the system, it can always step outside the system and apply the full complexity of all of reality to the situation if it wants to.
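(Re point 1, a minimal sketch of what "solved" means for a game small enough to brute-force; the code and names are mine, purely for illustration. For chess the tree is astronomically larger, but the value being computed is defined the same way.)

```python
from functools import lru_cache

# Negamax over the full tic-tac-toe tree. Board is a 9-char string, "." = empty.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def has_winner(board):
    return any(board[a] != "." and board[a] == board[b] == board[c]
               for a, b, c in LINES)

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for the side to move: +1 win, 0 draw, -1 loss."""
    if has_winner(board):
        return -1            # the previous move completed a line
    if "." not in board:
        return 0             # full board, no line: draw
    other = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, c in enumerate(board) if c == ".")

print(value("." * 9, "X"))   # 0: perfect play by both sides is a draw
```

Once this table exists, no amount of extra intelligence can beat a player who consults it; that is the upper limit in its starkest form.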
1. One motivation to ask questions about chess is that it is often (rightfully) cited as an example of a domain where superintelligence is out there. The famous AlphaZero vs Stockfish match is also super impressive, since it seemed to suggest that superintelligences can _dwarf each other_! Which is scary and supports the idea of recursive self-improvement, foom, etc. The common objection is "but chess is a closed domain with limited rules, so it is not surprising that superintelligence is scary there; real life is messy and open-ended etc". I just thought this specific objection is not really on point, and the chain of dwarfing intelligences may actually be quite short for chess. I don't think this is commonly brought up! For tic-tac-toe the chain is obviously trivial, but it is also qualitatively very different from chess (no combinatorial explosion).
2. > "The idea that, if a toy model can be too simple that way, *reality itself* can also be too simple that way, is a big jump, especially when our toy examples can always be expanded to allow for more intelligence.".
It is a huge jump, and this is what I find the least convincing in the "AI as normal technology" write-ups! I am not even sure about the game of Go. I just wanted to start with something toy-ish, but still non-trivial, and see where we can go from there!
In reality, however, there are domains that resemble chess. One example is weather forecasting, which does feel similar to chess. To produce a 5-day forecast, one can use "general understanding" (models) and concrete calculations, but try doing the same for 2 weeks and chaos makes it futile. The weather might simply be too chaotic over a 2-week horizon.
3. I agree with all of that. Yes, there are domains where more intelligence is a game changer, and domains where it is already saturated and all needed intelligence is already discovered.
I think it is crucial to understand what kind of domains belong to each category, and why. Since "reality as a whole" includes all domains, it belongs to the former category.
What I am not sure about is whether the domains where intelligence is not that important are just too simple. Not sure if this is the only or the main reason. E.g. long-term weather forecasting is likely uncrackable not because the weather is simple, but because it is too chaotic. I think chess is also "too chaotic" - otherwise the chain of intelligences would have been longer!
If the main reason intelligence is limited is this chaotic property, that is an important insight.
Lions and spears and Spaniards and Aztecs are super important counter-examples. In those cases, reality was calculable/understandable enough to achieve a decisive advantage.
So I would greatly push back against "the world is messy, so AI won't be scary". The messiness does limit intelligence in important ways though, and I would love to understand it more.
From the standard starting position, engines today can probably force a draw against perfect play. I'd call this a weak solve (abusing mathematical terminology and substituting any notion of "proof" with "looks true from experimental evidence"), but from an arbitrary position, my instinct is there's still considerable room for improvement.
Afaik, in chess engine championships, it is already the case that the engines only play from given starting positions that give white an advantage, to avoid every game being a draw in mostly the same position. That would line up with it being functionally solved.
Then again, before the advent of LLMs, engines already were stronger than GMs, but LLMs improved on those engines in unforeseen ways (not just brute-force calculating lines, but playing more human-like on top). And engines are improving still.
I would probably share the intuition that chess is a theoretical draw, and that there are diminishing returns, but I am less sure we already are seeing levels that could draw even against magically perfect play.
> but I am less sure we already are seeing levels that could draw even against magically perfect play.
I am not 100% sure either. It is also helpful to define "magically perfect play" better.
For a god-given GTO engine with access to the oracle (giving a "-1:0:+1" true score for every position), there could be several options with varying levels of trickiness:
a) If the current position is a draw, the engine chooses between the draw-preserving moves randomly, or by a similarly silly heuristic (e.g. the move that comes alphabetically first in standard notation). (A minimal code sketch of this option follows these lists.)
b) The engine plays in some sort of aggressive "play for a win" style, choosing the sharpest and most challenging lines possible, under some definition of sharpest and most challenging (what would this be?)
c) The engine knows everything about the opponent's (human-developed) engine, and can magically choose the line that will be the most challenging for this _specific engine_. E.g. exploiting weird idiosyncrasies in the engine's eval function - with oracle-like efficiency.
d) The engine can bluff, meaning it can play the losing move, knowing the refutation would require precision beyond the vision horizon of the opponent (with or without knowing the specific opponent).
For the human-developed opponent's engine, there are a couple of options:
a) It could be specifically designed or tuned to play for a draw (knowing the opponent is GTO), selecting the driest and most drawish lines.
b) It could use randomization, choosing between similarly evaluated positions (to defend against (c)-style exploitation).
The answer might depend on those options. Assuming the human engine implements (a) and (b), I'd probably still bet on a draw against god's (a) and (b). Not sure about (c) and (d).
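To make option (a) concrete, here is a minimal sketch; `oracle`, `legal_moves`, and `apply_move` are hypothetical stand-ins for the god-given tablebase and the game rules, not any real engine API:

```python
import random

# oracle(p) is assumed to return the true score (+1/0/-1) for the side to move.
def perfect_move(position, oracle, legal_moves, apply_move):
    value = oracle(position)
    # A move preserves the value iff the child position, scored from the
    # opponent's perspective, is worth the negation of our value.
    preserving = [m for m in legal_moves(position)
                  if oracle(apply_move(position, m)) == -value]
    return random.choice(preserving)  # the "silly" tie-break: uniform random
```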
Yes, there is a difference between GTO and exploitative play.
- GTO is defensive. It reduces your mistakes to zero, which means there's zero opportunity for the opponent to punish you.
- Exploitative play is aggressive, in order to maximize your score against a weaker opponent. Against a strong opponent, they might punish you, which means the exploitation would backfire. But if you can determine that your opponent is likely to let overaggression/blunders go unpunished, you can make riskier plays and reap bigger rewards.
- Similarly, there's the concept of "metagaming", which refers to the rules and behavioral patterns outside the explicit ruleset. Metagaming is often interpreted to mean something like "best practices" (e.g. always control the center). But it can also include making "reads" on a specific opponent's idiosyncrasies, or perhaps the opponent has a reputation for a certain style, and so you react/prepare accordingly.
> The engine can bluff, meaning it can play the losing move, knowing the refutation would require precision beyond the vision horizon of the opponent (with or without knowing the specific opponent).
Fun fact! This sounds like what's known in chess as a "Tal Move". I.e. an aggressive move which is technically suboptimal, yet extremely challenging for opponents to navigate. Named after Latvian GM Mikhail Tal, "the Magician from Riga".
A theoretically perfect engine could be both GTO and exploitative, masterfully exploiting the opponent's idiosyncrasies but still not going into "theoretically lost" territory, remaining in the "draw" area. I suspect this would limit the options quite a bit, though, compared to a Tal-style exploiter.
Thanks for the "metagaming" - nice term. My favorite example of metagaming is from a game involving Tal's stylistic opposite. In the well-known Karpov vs Miles game, Karpov could have played a Greek Gift bishop sacrifice, which in that position was of unclear soundness and required calculation. Miles could have defended against the potential sacrifice, but did not. Karpov did not go for the sacrifice and ultimately lost the game. After the game, Miles was asked why he allowed the sacrifice - did he calculate that it was unsound? "Oh, no, I did not calculate it at all, I knew Karpov would never go for this line", Miles replied!
You could exploit both an ultra-aggressive and an ultra-calm style!
> A theoretically perfect engine could be both GTO and exploitative
Eh... GTO and Exploitation tend to be mutually exclusive. The point of GTO is to play maximally defensive. This doesn't mean that you shouldn't apply pressure and probe for errors, but it does mean that you should avoid strategies that have systematic weaknesses. (E.g. attacking with your queen too early can leave her vulnerable, which means having to move her twice, which forfeits development.) Whereas in order to be exploitative, you need to be willing to take bigger risks proactively. E.g. with gambits, you often sacrifice material/chain-integrity for faster development.
A perfect engine could theoretically switch between different styles of play. But you can't play GTO and Exploitative simultaneously. Instead, there's often going to be a pareto frontier of risk/reward, and you have to choose among different trade-off profiles, with GTO being the safest option (like the bond market).
> "Oh, no, I did not calculate it at all, I knew Karpov would never go with this line", - Miles replied!
You might be interested to learn that in poker, bluffing serves a dual purpose. Hollywood makes it seem like bluffing is only about winning the current pot. But bluffing also serves a 2nd purpose, which is to widen your range (in the eyes of opponents). E.g. if you bluff and someone calls it, you lose the pot. But also, the other players now *know* you're willing to bluff, which makes them more likely to call your raise in the future when your hand is actually strong. So even in GTO, it's important to bluff every once in a blue moon, just to keep opponents on their toes. Likewise, if Karpov had a reputation for a wider range of play, perhaps Miles wouldn't have been as confident in his read of Karpov.
> You could exploit both ultra-aggressive and ultra-calm style!
This reminds me of another point you might find interesting.
People often think of "aggressive/defensive" as a 1D continuum. But in poker, it's widely recognized that you actually need a Punnett square to properly classify strategies. There's an "aggressive/passive" dimension and a "tight/loose" dimension. Aggressive/passive means "how frequently do you raise, given a strong hand? (instead of checking)" and tight/loose means "how easily do you fold, given a weak hand? (instead of calling)".
Tight-aggressive is generally considered optimal. It means you raise on strong hands, and fold on weak hands. Which makes a lot of sense to me, because it highlights that good decision-making is *conditional* and *decisive*. In other words, it's important to recognize when your position is strong or weak, and to react accordingly. In contrast, it's common for players in strategy games to get into this mindset of "I should be more aggressive (by default)" or "I should be more defensive (by default)" when the correct answer is usually "it depends on the game-state". Though this can be hard to execute adeptly, since it relies on having developed a certain amount of game-sense.
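As a toy illustration of those two axes (the 0.5 cutoffs are invented for the sketch, not taken from poker literature):

```python
# Classify a player by how often they raise strong hands vs. fold weak ones.
def classify_style(raise_rate_strong, fold_rate_weak):
    aggression = "aggressive" if raise_rate_strong > 0.5 else "passive"
    tightness = "tight" if fold_rate_weak > 0.5 else "loose"
    return f"{tightness}-{aggression}"

print(classify_style(0.8, 0.7))  # tight-aggressive: the generally optimal corner
print(classify_style(0.2, 0.2))  # loose-passive: the classic "calling station"
```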
I actually had another option in mind: I could imagine that a god-engine can avoid draws, but only if it accepts some risk of losing against the worse human-made engine.
In general, sharp and challenging attacking lines tend to be two-sided. You accept an imbalance in the position which might be exploited by both sides, believing that you got the better end of the deal. Those kinds of games are way more likely to result in either a win or a defeat, they are less drawish.
This is closest to your option d), except it doesn't have to be a losing move, just a move that ultimately guarantees that one side loses, because both have to try to win for optimal play.
Then again, maybe this moves the speculation just up one level, where the question becomes if human-made engines are good enough to always avoid imbalanced positions that punish playing for a draw.
I guess my intuition is mostly based on what has happened these last few years. I couldn't have predicted that LLMs would blow traditional engines out of the water and win against the highest-level GMs at knight odds, but here we are. Maybe the next major advance will be able to grant Leela knight odds and still win. Or maybe there will be a ceiling after all.
> You accept an imbalance in the position which might be exploited by both sides, believing that you got the better end of the deal.
Thanks for the helpful intuition pump. If the god-like engine has god-level understanding, it also understands when a position merely appears to be good for a weaker player. So it may be able to systematically find positions where the other side faces borderline choices, and it takes only one wrong choice to accept a losing bet.
But heck, I did win against Leela at queen odds two times; I built some intuition about the way it typically bluffs, and it helped...
A follow-up with informal evidence I just ran across:
Here is a popular youtuber, partly focused on analysing games, mentioning that Leela KnightOdds tends to play suboptimally sometimes to gain an edge, as the correct follow-up is hard to find (timestamped):
> I guess my intuition is mostly based on what has happened these last few years. I couldn't have predicted that LLMs would blow traditional engines out of the water and win against the highest-level GMs at knight odds
Nit: I think the odds-Leela engines are CNNs, the LLMs' predecessors.
Ironically, my own intuition was also influenced by the Leela-odds games, but in the opposite direction! I am quite a weak player, and I played against Leela at queen odds 30 times or so. It was a fascinating experience. But even more fascinating was when I suddenly won, and then a couple of games later did it again! In both cases I managed to luckily call all the bluffs right, until Leela ran out of them and found itself completely demolished.
Which led me to think "well, if _even I_ can call the bluff"...
Honestly, I don't know enough of the technical details to know the difference, I may well be wrong on that. My understanding is also that current best engines aren't purely LLM, but use them in conjunction with other techniques.
My intuition on the knight-odds is purely derived from analysis of games against top GMs like Nakamura and Vachier-Lagrave, plus the underlying stats for the series that include that game. Maybe there are different versions? Or maybe you are downplaying your level? Or maybe it is down to time controls.
CNNs (convolutional neural networks) and LLMs (large language models) are types of deep learning architecture. CNNs were invented earlier and are a natural fit for chess, since they have geometry baked into the architecture (CNNs were invented for machine vision). I am not aware of anybody seriously trying to build a chess engine using LLMs, but I would not be surprised if somebody did. Leela used CNNs last time I checked.
I think the difference between queen odds and knight odds is huge; they played knight odds (I think?) and I played queen odds. I played 10 min + 5 sec increment rapid time control (and they played blitz?). I am ~1560 on lichess, so really not downplaying it :).
For queen odds, I initially tried to play pragmatic chess (like I would have played against a human in a similar situation). King safety, solid, slow, this kind of stuff. This turned out hopelessly badly.
Then I changed the strategy to mirror Leela's craziness, aiming to keep proposing minor-piece / exchange sacrifices. I knew I could afford two minor-piece sacrifices, and knew Leela knows that and mostly refuses.
With this strategy I was losing as well, but the way I was losing felt much less hopeless (I was feeling I was really close several times), and eventually it worked out. Fun experience.
I think mathematically, "perfect play" just means "win from a winning position, do not lose from a drawn position," and I agree it's meaningful to distinguish between "perfect play" strategies based on how they do against imperfect play: how many drawn positions they win, and how many losing positions they either win or draw from.
> I think mathematically, "perfect play" just means "win from a winning position, do not lose from a drawn position,"
There are other criteria you might consider. You could have two strategies that each satisfy both requirements and consider one better because it takes 1% as many moves to finish the game as the other one does. In that case, "perfect play" would require winning in the minimum number of moves.
Drawing is more ambiguous; it's arguably better to take longer so that your opponent has more chances to mess up.
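One way to formalize the baseline notion (my notation, nothing from the thread): let v(p) ∈ {-1, 0, +1} be the game-theoretic value of position p for the side to move. A strategy σ is perfect iff it never gives anything away, i.e. v(σ(p)) = -v(p) for every position p it can face; the minus sign is because the resulting position is scored from the opponent's side. The refinements above then rank perfect strategies by tie-breaks: fewest moves to convert in won positions, most chances for the opponent to err in drawn ones.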
Yeah, just preserving a draw is probably quite a weak criterion against a playing-for-a-draw human engine. The human engine would propose the most drawish line at every move; it is enough for the "perfect player" side to accept once or twice to dry out the position completely. In a game of 40 moves, that should happen. So indifference to the style of play is likely a very drawish choice.
Being a quite weak amateur, I see this often when I practice endgame positions with Stockfish. It is often OK with drawish lines. Against a good human I would have lost many endgames I am able to draw against SF.
Aren’t you basically saying that chess is functionally solved, even if it hasn’t been yet mathematically? You’re describing what I presume would happen if you put two Game Theory Optimal poker engines playing heads up against each other, and that has been solved.
Yes, and that's one of my major complaints about the concept of "superintelligence". Chess is functionally solved, so building an engine that is 1,000x "smarter" won't do anything. Maybe it will tie 1000x faster against a modern engine, but no amount of CPU power could make it win. But I think that most problems in life are just like chess, and in fact we might have similarly hit the limits already on some of them.
I think your last sentence is the crux you'd have with most disagreers, me included. I'm actually confused as to why you think most problems in life are just like chess, you seem to have such different underlying intuitions generating that claim I'm struggling to bridge the gap. My guess is we have different central examples of "most problems in life"?
Well, to use the trite example, the problem of going faster than the speed of light is solved, in the sense that we know it can't be done. But consider something more boring and mundane: how do you solve the problem of sitting comfortably? The answer is going to be some kind of a chair. Sure, the superintelligent AGI could build some kind of a vibrating gel-based marvel-chair, but it's still going to be a chair, and it's not going to really surpass existing chairs all that much. Or consider portable energy storage. There are lots of different options there, with different tradeoffs, but for the best energy storage density that is still acceptably safe (on Earth, that is) you're probably looking at fossil fuels (be they natural or artificial). Obviously they have lots of massive disadvantages, and you can finagle the chemical tradeoffs in different types of electrical batteries, and maybe the AGI could improve battery energy density... but it won't be improving it by 100x or even 10x.
Most of our technologies, be they physical or social, follow the same pattern of rapid takeoff (lithium-ion batteries were a massive game-changer when they were first commercialized!) followed by diminishing returns -- and that includes LLMs, BTW. The standard AI-FOOM counter to that is to claim that the AGI is going to make some kind of unprecedented breakthroughs that we cannot possibly imagine -- but that's using the same logic as saying "there are no contradictions in my holy scripture because God is just so far beyond human understanding, amen". Sure, you can say that, and that could even be true, but it doesn't get you anywhere in terms of being able to make operational decisions.
> The possibility of an Alcubierre drive has not been disproved.
This is technically true, in the same way that technically the possibility of a cross-dimensional invasion from a gateway on Phobos has not been disproved. It's hypothetically possible. Especially if you can get your hands on lots and lots of negative mass.
> The actual answer is going to be that sitting comfortably is an arbitrary artifact of human anatomy, and could be solved by modifying human anatomy.
Yes, lots of problems can be "solved" by not solving them.
> Why not?
Because chair designs have evolved over the centuries, to the point where they are pretty close to optimal -- just like chess engines.
> Lithium-Ion is a subset of the broader category of rechargeable batteries, which has existed for centuries by that point. You can't even follow your own argument.
True, but I was pretty precise in my wording. Li-Ion batteries were (arguably minor) game-changers in the way that earlier rechargeables were not. That said, it kind of sounds like you agree with my example?
> I'll need a source on that.
LLMs were supposed to revolutionize all areas of business and/or put everyone out of work, but this hasn't happened, and actual businessmen as well as AI specialists are acknowledging that the "AI bubble" is about to burst. LLMs have indeed automated some baseline tasks, but this has not resulted in the expected dramatic increase in productivity. You can read this article for a summary, but there are many others:
> No, it's the same logic as "Humans have been making unprecedented breakthroughs non stop even without superintelligence"...
First of all, science output has been slowing down ever since science was invented (arguably in Ancient Babylon). Secondly, I cannot name a *single* breakthrough in human history, ever, that is in any way comparable to the claims the AI-doom/FOOM crowd is making. Not a single one. That's kind of the point: if I could name such an event, it would devalue their claims significantly, seeing as it already happened and yet we are all still here. This does not mean that "everything has always been obvious from the beginning", but it does mean that the non-obvious things usually offer incremental benefits. In addition, note once again that no technological revolution has occurred in the area of chair design, nor do I expect one to occur, ever.
> The operational decisions stemming from an overestimation of a foe's capabilities have won wars.
This reasoning compels you to overestimate the danger of literally every single possible threat. So, what have *you* done today to prepare for the demonic invasion from Phobos? Are you wearing cold iron to repel the Fae? Have you eaten bacon lately, and if so, how do you plan on avoiding Jahannam? And BTW, I am an interdimensional space wizard who will destroy the Earth unless you give me $100, so where's my money?
Yes, basically that! The thing is, I never appreciated the difference between "solved" and "functionally solved". In a theoretical sense, chess is astronomically far from solved. Not only is there no proof it is a draw; there is not even a proof that queen odds is winning (!), though practically we can be really, really, really 100% sure it is. There is likely no hope for a proof of "queen odds is winning" in our lifetime.
If chess is indeed "functionally solved", there might be some other important domains similarly functionally solved, or close to it. "AI as normal technology" suggests election forecasting and human persuasion.
"functionally solved" is a nice way of putting it. Essentially, the incremental ROI of additional intelligence goes asymptotically to zero, even if it is never quite zero (maybe an exponential decay?).
I wouldn't be surprised if there are a bunch of domains where existing solutions have "functionally solved" the domain (e.g. LED lighting is around 50% efficient - it might be that trying to squeeze out another factor of 2 isn't worth whatever exotic materials would be needed to do this) but also a bunch of domains where existing solutions are nowhere near "functionally solved" (I suspect most of biomedical questions are like that, given the intrinsic complexity of the system one tries to fix).
You might find reading about computers playing checkers to be interesting. The best human is/was much closer to the theoretical peak of performance.
And humanity now has a "solution" to checkers.
You might want to start here: file:///Users/mroulo/Downloads/1040-Article%20Text-1037-1-10-20080129.pdf
But the basic idea that once a problem has been "solved" being much smarter than the folks who have the solution won't help you is correct.
I don't expect to lose tic-tac-toe even to people much smarter than I am.
And for practical purposes (e.g. transportation routing) we might have "good enough" algorithms now such that even a perfect solution isn't much of an improvement.
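As a small illustration of "good enough" in routing, a toy nearest-neighbor heuristic (my sketch; it is provably non-optimal in general, yet on typical instances it lands within a few tens of percent of the best tour, which is often all a dispatcher needs):

```python
import math
import random

# Greedy nearest-neighbor tour over random points in the unit square.
def nearest_neighbor_tour(points):
    unvisited = list(points[1:])
    tour = [points[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(tour[-1], p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
tour = nearest_neighbor_tour(pts)
print(sum(math.dist(a, b) for a, b in zip(tour, tour[1:])))  # total tour length
```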
I personally, based on little more than gut hunches and personal experience, feel that general intelligence is also subject to similar (maybe not identical, but similar) diminishing returns above some level. Sure, going from IQ 80 to 100 to 120 has meaning across the board. But 120 to 140 has less meaning, and in fewer areas of life. And I've seen little evidence other than sketchy analogies that suggest that the trend on increasingly narrow and small gains doesn't continue. Effectively, it seems to act logistically: big gains going from sub normal to just above normal, then flattening out.
Does anyone here know of a good deep dive on what it would take to fix the US healthcare system? I'm only ever exposed to the left-wing "corporations (and insurers, specifically) are bad", but something tells me it's probably not that simple...
I view this as evidence for the hypothesis that most of the badness comes from insurance creating a principal-agent problem. When consumers aren't directly paying for services, that corrupts the ability of price signals to function properly. This is also evidenced by the fact that real prices in cosmetic surgery have fallen over the past 40 years. I don't think this is 100% of the explanation, but I think it's significant.
I used to believe that insurance was the main cause of high healthcare costs in the US. I still believe insurance is a bad model for healthcare for the reasons you mention (and more), but after coming across RCA's work I no longer believe that the price effect is as large as I used to think it was.
It's a horribly complex & complicated system, built up from decades of path-dependent tweaks; most reforms are insufficiently humble about the predictability of their proposals' effects.
I would start by removing just the tax exemption for employer-provided health insurance. The historical accident that led to it is well understood, so we can safely remove the fence, Mr. Chesterton.
I suspect a surprisingly-large portion of the overall dysfunction is due to the tight coupling between specific employment & insurance. Give that change a few years to marinate, then reevaluate & take the next step.
---------
All that said, eliminating Certificate of Need laws would do wonders for affordability, and isn't nearly as systemic a change so should be low risk.
John Schilling has the bulk of it; a couple clarifications:
Things like allowing employees to move healthcare packages between employers (but still be employer-provided? I'm not sure what you're suggesting but I'll try to make good faith inferences) would increase the accidental (vs. essential) complexity of the overall system; there's no principled a priori reason for it to go through employment in the first place, and decoupling reduces complexity (which makes systems more robust).
Second, changing tax treatment should not be expected to immediately result in all employers discontinuing the benefit (especially given things like union contracts); it'll take time for the shifted incentives to take full effect (hence my earlier "Give that change a few years to marinate…"), by which time the individual market would likely be ready for its new customers.
It would lead to less employer-provided health insurance and (eventually) more privately-purchased health insurance. That moves "who pays" significantly closer to "who decides", which would eliminate some of the perverse incentives in the US health care market.
It also eliminates the problem of your insurance going away when you need it the most, because you are too sick and/or injured to work any more. We have workarounds for that, sort of, but they're ugly kludges.
As noted by the "eventually", there's a pitfall in this plan: the private health insurance (and the realigned pay scales that enable people to afford it) isn't going to materialize instantly.
1. Assuming everything is as the author believes, the state the article says US healthcare is in is what most people would call broken.
2. It never argues directly that US healthcare isn't more inefficient than other countries.
When people say US healthcare is broken, they mean it's inefficient; we're spending way more money without improving health outcomes. People have different guesses of where the inefficiency is: some suspect wages, others suspect administrative complexity, others still blame more unnecessary medical tests and procedures, etc. But as long as it's wasting too much money somewhere, we say it's broken.
The linked article says it's true that we're spending more on healthcare without much improvement in outcomes, and this is primarily because of the diminishing returns of additional healthcare, and because Americans have worse life outcomes due to lifestyle factors, especially obesity and drug use [1].
Getting excessive healthcare for diminishing returns is what we call broken. We don't want a healthcare system that spends double what most other rich countries spend on healthcare for unnecessary medical tests and procedures. People rely on the healthcare system to tell them what kind of care they need. You might get a CT scan on the recommendation of your doctor. And even if it's purely the patient's initiative to get extra medical care, it's still a failure of the system if it's not improving health outcomes.
You can imagine a society where people, upon learning they have a terminal illness, commonly turn to alternative medicine quacks who take all their money and don't fix their illness. This may have been due to the patients' own choices, but it's clearly not satisfying their true preference of not dying. They don't have the expertise to choose the best treatment on their own. A good system would help them satisfy their true preference. The same principle can be applied to mainstream medicine: a good system doesn't waste tons of resources on ineffective treatments. This is just as bad as if the waste were in administrative overhead.
My second point was the article never directly argues that healthcare isn't broken (i.e. inefficient). It shows we spend more because we have more money and can spend more, but this is exactly what you would expect if healthcare were broken. It's not possible for poor countries to waste more money than they have available on healthcare. The closest it comes to arguing healthcare isn't broken is when it argues lifestyle choices are able to explain a decent part of the lower life expectancy in the US. But that doesn't show US healthcare is efficient. Even if you could explain 100% of the difference in life expectancy with factors unrelated to healthcare, you'd still be left with a country that spends twice as much on healthcare for no improvement compared to other countries. The points the article argues are all consistent with a broken US healthcare system that is wasting money somewhere.
[1] The drug use is interesting because the opioid crisis was started primarily by overprescription of opioids by the US healthcare system, which increased the number of addicts and in turn increased demand for illicit opioids. So it was in a sense also caused by a broken US healthcare system, but broken in a completely different way than whatever is causing the rising prices.
>It shows we spend more because we have more money and can spend more
That's not necessarily the fault of the healthcare system, though. If people have a preference for wasting money on placebos then that's not something you can really blame the system for. I have a suspicion that healthcare, like education, is an economic reflection of a characterological defect in the culture. We have an unrealistically naive set of expectations for both. With education, for example, we expect the system to educate *everyone* to the same standard. We can't accept that that's not possible because some people are stupider and lazier, so we blame the system or the teachers when those people fail. The teachers can't fix the problem and they can't be honest about it without risking their jobs, and so they participate in obfuscation like No Child Left Behind or eliminating racist testing. That does nothing but launder people's naive ideology to themselves and lets politicians grandstand that they're doing something when in fact it's nothing but a giant boondoggle. The parallel flaw with healthcare is that we think everyone should get the best care and have it be affordable. There is zero notion of "sorry but you don't get the million-dollar surgery that has a small chance of extending your life by 3 years because you make minimum wage and that would represent a deadweight loss to society". So we offload that conflict to insurance companies and they predictably take our money. They're not actually in the business of improving health outcomes, they're in the business of making us feel like we've solved the problem of healthcare costs.
We can't face harsh realities as a culture and so we sweep them under rugs. I'm not sure it's fair to blame the system for that.
I think I addressed this in my last comment. If your doctor tells you to get an MRI, you get an MRI. The general public is not equipped to figure out if a test or treatment is effective enough to be worth the cost. We rely on the expertise of medical professionals.
Every healthcare system has to deal with this one way or another. At the end of the day, a healthcare system that's inefficient because it provides too much expensive and ineffective treatment is just as bad as a healthcare system that's equally inefficient due to administrative overhead.
'this is primarily because of the diminishing returns of additional healthcare'
Yes and no. It is primarily because Americans are so much richer than everyone else that they spend more on healthcare, and additional spending on healthcare is marginally less useful. The key thing here is that if other countries were as rich as the US, they would also be spending as much on healthcare! And indeed, when countries get richer, they move along pretty much the same path that the US has moved along, regardless of whether you would regard their system as broken or not. See the animation on this link
Of course richer countries generally spend more on healthcare than poorer countries. A trendline on a graph showing this shouldn't change anyone's position much, since everyone already believes this to be true.
And again, importantly, the articles you linked never show that the US gets better outcomes for its increased health spending. You could use all the same charts from the article to argue that the US healthcare system is one of the most wasteful. Everything they show is also what you would expect to see if it were waste. They show richer countries spend more, but they never show the extra spending isn't going to, for example, administrative complexity.
US healthcare spending also looks high even for its high income. According to this data [1], the US spends about 1.8 times more on health per capita (PPP) than Canada, despite only having 1.3 times the GDP per capita (PPP). Or see the first chart here [2]. This appears to refute the central point of the articles you linked.
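A quick back-of-envelope using only those two ratios: if health spending scaled one-for-one with income, the spending ratio would match the income ratio, so the implied income elasticity here is well above 1.

```python
import math

spending_ratio = 1.8  # US vs Canada, health spending per capita (PPP), from above
income_ratio = 1.3    # US vs Canada, GDP per capita (PPP), from above
# Implied income elasticity of health spending; 1.0 would mean proportional.
print(math.log(spending_ratio) / math.log(income_ratio))  # ~2.24
```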
You have to dive into the weeds at least a little to understand something! The US healthcare system does not have worse outcomes for more money. But here's a tldr for you
The US only has worse outcomes because of higher obesity and violent crime, which have no relationship to the healthcare system.
The US only appears to be spending more money because it is richer than everyone else by far. When it was as rich as these other countries, it was spending approximately what they spend today.
Alright, the best deep dive I'm aware of is Kenneth Arrow's work on healthcare market failures (1). You can read the seminal paper here. He lays out the case that market forces don't work correctly in healthcare because of the profound ignorance of the patient, the insurer, and even healthcare staff. Basically, the patient doesn't know what he needs or what it costs, the doctor doesn't know what it costs, and the insurer doesn't know what is needed, so of course everyone fails.
Having quickly scanned it though, I think it's pretty unreadable for anyone without an econ background. Maybe there's a good summary somewhere.
I think the big problem is that the person deciding which medicine to use, the person paying for the medicine, and the person using the medicine are three different people. Even if everyone knew what it costs, it wouldn't matter because the only one that cares is the insurance company, and they don't get a choice. If they don't buy what the doctor prescribed, they'll get sued.
Another problem not yet mentioned: everyone makes too much money. Doctors are making (say) $800K when $200K would be sufficient. This is a problem which can be fixed independently of all the others, just increase doctor supply.
One big distinction I'm not seeing in the other comments is the difference between improving *healthcare delivery* and improving *healthcare financing*. Often optimizing financing puts pressure on how good of care you can actually deliver. And optimizing care often costs more and puts pressure on financing.
The US has (generally) fallen into optimizing mostly for the top-end delivery--the upper bound of US healthcare is second-to-none IMO. And even some of the mid-tier is pretty darn good. But that comes at a cost. Our financing system is baroque and obnoxious at all levels. A large part of that is historically contingent rather than intrinsic, but *history matters*. You can't get the same results as someone else just by copying their programs at T = now. It's all too entangled. Plus demographic effects.
So yeah. What are you trying to fix? What are you willing to sacrifice to get there (including political viability)?
Residencies are part of the problem, we don't have enough doctors and the residency system creates a specific national limit on the number of doctors we are allowed to produce each year. And the system is run by people who benefit from doctors being scarce and thus in high demand/very well paid.
Not a deep dive by any means, but the problems seem fairly obvious to me. Market discipline drives down prices, but healthcare blocks it at every turn: patients are restricted by their healthcare plan when choosing a doctor, and there's no price transparency, and hence no ability to shop around and choose a provider based on price. Some patients are directly exposed to the cost of the service, and some aren't. Some don't know until after they've received the service whether they will be.
In terms of policy, I think the following three reforms would get us 90% of the way toward a sane healthcare system:
* Insurance plans may not discriminate between providers: every network should include every provider
* Each provider must publish a publicly available fee schedule, which states how much they charge for any service that they offer, at a level of granularity that patients can reasonably be expected to understand. Providers may charge whatever prices they want, but they may not deviate from the prices that they've published in their fee schedule.
* The government should operate a publicly accessible provider database, including up-to-date fee schedules, to facilitate discoverability and competition.
Doctors literally can’t do any of that unless they don’t take insurance. If you come to me and ask for treatment, I can’t tell you how much your insurance will pay, it depends on whether I am in network at this specific moment, whether the insurance company covers the specific thing I do at 100%, 50% or not at all, if the insurance company has negotiated a special deal with my boss to get charged less, etc.
Cigna just announced that it is going to be using AI to automatically "downcode" bills for complex visits; this might save you money, or it might result in your doctor no longer seeing Cigna patients. It's chaos.
Doctors can literally say "I don't know how much your insurance will pay and I don't care, *I* will provide this service if and only if someone pays me eight hundred dollars. If your insurance company will cover eight hundred dollars, great, if it will cover six hundred dollars and you will kick in two hundred, also great, if it's just the six hundred, no deal, and if your insurance would have covered a thousand then I'll have to keep that in mind next time I revise my fee schedule."
Jesse's proposed solution is an inefficient kludge, a blunt instrument where we would prefer a scalpel, and offensive to my libertarian sensibilities. But it isn't literally unworkable or literally incompatible with insurance coverage. And the system we've got now is *also* an inefficient kludge that offends my libertarian sensibilities.
How does price competition work for emergencies? A scenario lots of people bring up is having some emergency and then having to pay exorbitant costs out of pocket. In an emergency, you won't get to choose providers.
As you pointed out, shopping around in a competitive market isn't really possible for emergency care. Hence, I think socialized emergency rooms are probably the least bad option: inefficient but predictable, and ostensibly operated in the public's interest.
I know it's kind of a taboo in US culture, but EU-style single-payer universal health care works great. Everyone complains about it because it's bureaucratic and not infinitely funded, but it's a huge deal to know that, whatever your life situation, when it comes to serious health issues the state has your back, without a bill at all. It sure beats tying your health insurance to your employer!
What is ‘EU style’ single payer? Different European countries all have different systems, many of which are not single payer. (Eg Germany, which is the central example of an EU country, is not single payer). Meanwhile the most common example of single payer, England, is not in the EU.
"EU style" single payer is a program largely characterized by cost savings brought about by a fairly monolithic genetic lineage mostly concentrated in a few large metropolises, rather than a melting pot of several genetic lineages spread over millions of square miles, and a defense budget paid for by a friendly ally.
This is the big thing people in the US overlook. Sure, single-payer healthcare in other countries is only a *little* better than healthcare in the US on some metrics, and still has its annoyances and tragedies. But that's what happens when they spend half to three quarters as much on healthcare as we do. Double those systems' budgets to match what we spend, and many things would get a lot better.
You have to be specific about which country you're talking about. Do you mean the UK's NHS? Because while it is beloved by the UK, it has been in serious trouble in recent years, and Americans wouldn't put up with the loss of service and the waiting lists it entails.
I live in Spain, and have lived in France before, so mainly these. Here in Spain many people who can afford it take private insurance too: for ~100€/mo you get access to private providers with easy access to specialists (in the public system you need to go through your GP first, and then possibly wait months), and a nicer room in case of hospitalization. It's a good deal. The public and private systems keep each other somewhat in check: private insurance can't get too expensive because the public system is decent enough, and the public system gets some slack because a good chunk of the population doesn't use it for their common needs. The word on the street is still that, for really serious stuff, the public system is better.
That's why it's important to clarify 'single payer' vs. 'single provider'. The latter (like the NHS) are a whole different ball of wax. A single payer system (like France or Sweden's) is a much easier transition.
There's an immediate, trivial-to-implement fix for drug prices: allow the import of drugs from anywhere in the world. That should equalize prices (to within shipping costs).
"We've Got You Covered: Rebooting American Health Care" sounds like what you're looking for.
Liran Einav and Amy Finkelstein are among the top economists who have researched the US healthcare system. I haven't read the book but knowing these writers, I feel like this is at the very least a place to start. And these are economists, so very far from a simplistic narrative that ignores incentives and markets (but also not a knee-jerk "just leave everything to markets" perspective).
I wouldn't call it a deep dive, but I did sketch how a libertarian might fix the US healthcare system, a few years ago. It might serve as a starting point.
I wonder what people worried that AI is going to kill us all a few years from now think about Trump's recent push to curtail visas for tech workers?
Like, a disruption to the Silicon Valley innovation ecosystem building doomsday machines seems actively good from their perspective?
The minus is that deconcentrating AI makes government restrictions a lot harder - right now the US or California government can pretty much unilaterally hit a pause button (maybe with a deal with China). If the US loses the tech advantage and AI research gets scattered between a dozen countries it does slow down a bit, but it also becomes much less manageable.
A couple of friends of mine work in the AI space and from what I understand it's pretty common that a lot of the best AI talent moves to or works for silicon valley because that's where the big AI players are.
But I could imagine that if California decided to stop AI research tomorrow, somebody would set up a new hot AI startup in, say, Amsterdam. Since people have already demonstrated they're willing to move from their home country to California, presumably a good bunch of them would be willing to move to Amsterdam, too. And they'd take their experience, knowledge and expertise with them.
So even if the US banned AI research, would that actually work?
For what it's worth, the industry is whining that it is impossible to do software startups in the EU because of excessive regulation. They are probably exaggerating, but it is not like relocating from California to the Netherlands would be easy.
They could probably at least get other countries to agree to a pause deal easily. And the US has enough of the AI infrastructure that it'd be hard to go against it.
This sounds like a generalized argument along the lines of "we can't ever use our power, because then we would lose it"? If it can't ever be used, why keep it in the first place? Like, after "unilaterally hitting a pause button", researchers might also scatter across a dozen different countries?
It's a lot easier to keep an arms control treaty going once you've made it. The idea is that right now the US lead is far enough ahead that, if it suggests reasonable limits on AI, it's relatively easy to get an international treaty to go along with them. That's harder the more active players there are who think they can get an AI advantage.
I mean, you can just as easily argue that as long as one country has a clear advantage, it has no incentive to stop developing, but when no one has an advantage, everyone has an incentive to make a deal. Treaties limiting the nuclear arms race were concluded only after the USSR mostly caught up to the US, if I recall correctly.
But I think both these possibilities are speculations of secondary importance compared to the direct effect of just damaging capabilities of leading labs.
How self-aware do you think the 95th-percentile most self-aware human beings are? Let's use a scale where '100% self-aware' means that at each moment you recognize the total set of drives and instincts that are active within your brain, and '0%' means you have zero internal awareness (not even of how you feel) and you are just acting.
In other words, how ignorant are the vast majority of people of their own drives, instincts, and motives, and how much does this matter?
I think you're misunderstanding the way human consciousness functions. For instance, our qualia continually feed us impressions (even when we're sleeping), but we selectively filter (either voluntarily or involuntarily) the impressions we receive. For instance, you may be watching a bird fly across your visual field, but while doing so, you won't be paying attention to the sensation of the clothes on your body (if you're wearing them) or the smells from the surrounding environment. Therefore, our "awareness," at least of the external world, is constantly being filtered and shifted. If we claim there are five senses (and I'm not 100% sure they're limited to five), then we're only 20% aware at any given moment.
OTOH, recent studies show that our sequential thinking and task management works at a piddling ~10 bits per second. Yes, it's that slow. I don't think this is the whole story, though, since we can recognize a familiar face in a crowd in under 300 milliseconds. And we can recognize a face outside a crowd in under 150 milliseconds. If we attempt to convert human information processing into bits (and I don't necessarily think this is a particularly accurate model), we're processing multigigabit images in roughly 10^2-millisecond windows.
Moreover, we're processing those images without conscious awareness of the mechanism.
Assuming you're a proficient reader, can you look at the text in this comment and *not* understand it? I can't. I read something, and unless I have to puzzle over a word, I immediately grasp the meaning without any conscious effort. This all gets done without any will on my part. And I can't turn it off.
As for self-awareness, it seems to be a type of qualia, because it segues in between the qualic inputs. Through meditation, you can train your mind to disregard external inputs and maintain a focus on the sense of awareness. Although your awareness dominates your experience during those sessions, your external inputs are either ignored or muted. So at 100% self-awareness, you've got little awareness of anything else because you're shutting out a lot of other shit.
> I read something, and unless I have to puzzle over a word, I immediately grasp the meaning without any conscious effort.
Not only that, but you are, if it's a novel, converting the text into different voices and visual images which, for me at least, are cinematic in scope. That seems like a lot of processing power. Were we to get an AI to produce images as fast as a human could read a page of text, it would be impressive.
Yet, I've noticed that internal images are more abstracted than the images that I perceive through my senses. But this may be a peculiarity of how my consciousness works. I spent some time studying under a Nyingma instructor. Their meditative exercises all involved visualizations—of mandalas or meditation deities. I discontinued the practice because I found myself getting frustrated that I couldn't create the image in my mind.
However, in other states of consciousness that weren't conducive to meditation, I noticed that I could visualize things in detail. I've observed that while I'm in the hypnogogic state before sleep, I'm able to direct my visualizations, allowing me to construct the facial features of friends, imaginary people, animals, or objects. Unfortunately, the hypnogogic state is transitory. Also, I can visualize freely while on psychedelics like LSD. But I can't seem to meditate while tripping. ;-)
I imagine that there is some way to measure activity level in the prefrontal cortex during deliberative thinking, vs activity and energy levels in the rest of the brain.
> As for self-awareness, it seems to be a type of qualia, because it segues in between the qualic inputs.
I find this claim very remarkable and interesting. There's a whole sequential model in there that hints at a full theory behind what you're saying. Could you elaborate a bit more?
After spending years observing my mind, I noticed that, while it's absorbed in a strong sensory experience, my consciousness can't pay attention to my selfhood (for the duration of that experience). As my attention to the sensory input fades, my self-identity reasserts itself, and I decide what to do or observe next—or a new sensory input may capture/override my attention.
From that, I derived the idea that the sense of self-identity is an internal "feeling" that is functionally equivalent to feelings we get from external sources. Moreover, there seems to be a "Qualia Manager" that cycles between our external qualia (the five senses) and our internal qualia, which include our sense of self and the feelings derived from the functioning of our autonomic systems (breathing, digestion, sexual arousal, body posture, balance, etc.). Our self-identity inserts itself in the "time slots" between other feelings. We can't (at least I can't) focus our attention on two things at once. My Qualia Manager appears to employ a weighting system that prioritizes different sensory inputs at any given time. We can impose our will and override our Qualia Manager, and either focus on our self-identity, or focus on sequential tasks (problem solving, speech, writing)—but without training it's hard to override the Qualia Manager for any length of time. Eventually, we get distracted by our sensory inputs, and the Qualia Manager reverts to automatic functioning.
This idea is implicit in Buddhist meditative praxis and their concept of aggregate processes (skandhas). By letting their consciousness follow the breath, i.e., by focusing on the sensation of inhaling and exhaling, the meditator attempts to train their consciousness not to be distracted by inputs from external qualia and internal qualia—other than breathing, and among the distractive internal qualia is the continually intrusive sense of self.
I don't have a metric, less any research to share, but at a first approximation my intuition is that the spectrum of individual differences in self-awareness is narrow compared to the scale you are using. I think there are hard constraints on the ability of a system to comprehensively understand/model itself.
Plus, if by "self-aware" you mean consciously aware of the factors influencing cognitive behavior, that's an even narrower range. My guess is that the most self-aware humans are well below 50%.
None of that means we can't become more self-aware than our individual default state, or that there aren't benefits that come from achieving that.
This was my intuition as well. I would think if you're _really_ good at it, you end up with less mental noise. But if someone came along and asked "why are you sitting there in meditation", "because it feels good" would catch maybe 20% of what was really going on under the hood.
Good question I'm not suited to answer. The first trilogy was great, and the second trilogy I read the first book, and loved it, but the rest were not out when I was in my fiction phase. Not sure how many are out now.
I hope that if he did not finish it in his mind, he does. The first trilogy was so emotionally visceral; I'd/I've never read anything like it before/since.
Both 'trilogies' are great (second one is actually 4 books), and it does technically reach a conclusion of sorts, but Bakker said truly finishing requires two more books, in a last arc. I really recommend going back and finishing.
Does anyone know of papers investigating the wisdom of crowds effect on a single LLM? That is, if you retry the same prompt 10 times and take the mean or median answer, does that improve the accuracy over single-shot?
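To be concrete about the kind of aggregation I mean, here's a toy sketch; `ask_llm` is a hypothetical stand-in for whatever completion API you use, not a real library call:

```python
import statistics

# Toy sketch: sample the same prompt n times at nonzero temperature and
# aggregate. `ask_llm` is a hypothetical stand-in for your completion API.
def crowd_answer(prompt, ask_llm, n=10):
    answers = [ask_llm(prompt, temperature=0.7) for _ in range(n)]
    numeric = [float(a) for a in answers]  # assumes the answers parse as numbers
    return statistics.median(numeric)
```

For free-text answers you'd presumably take a majority vote over normalized strings instead, since means and medians only make sense for numbers. I believe the keyword to search for the chain-of-thought version of this idea is "self-consistency".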
Am so grateful for Trump's push for peace in the middle east. For the first time you have competent negotiators recognizing which players you need involved and who has leverage and who doesn't and what demands are achievable.
Hopefully within 10 years you have regional integration and a peaceful two-state arrangement, two states for two peoples, each wishing for all of it, but only culturally, not militarily.
Until his peace plan actually gets accepted by Hamas and until this international peacekeeping coalition he's imagining actually puts boots on the ground, I'm not going to heap any praise on him.
The hard part of Israel/Palestine has never been coming up with ideas; the hard part has been getting people to agree to them. (And to keep agreeing even when some asshole on the other side breaches the agreement.)
You minimize Obama's middle east "achievements". There was also that small episode where about half of the region descended into some combination of civil wars, Iranian proxy status, Russian bases, or takeover by ISIS. And its indirect effects on the internal cohesion of the EU.
He’s facilitating genocide and ethnic cleansing in Gaza as the world watches in horror. Can’t really win a peace prize with that black mark on your record.
A day after Modi said this, Trump imposed 50% tariffs on India.
Israel-Hamas:
Last week Israel bombed a meeting of Hamas leaders in Qatar. The leaders survived. They were there to discuss possible peace deal responses. Now that's off the table. Trump whined that Netanyahu didn't warn him. Netanyahu stated he called Trump an hour before he launched the attack.
And Trump has finally stated that he doesn't think Putin wants a deal and that Putin might lose. So much for a peace deal for Ukraine and Russia.
Meanwhile, Saudi Arabia and Pakistan have entered into a NATO-like military treaty, and there's a realignment of political relations in the Middle East and South Asia that is reducing US influence in the region.
Obama's Nobel Peace Prize was ridiculous and helped to discredit the award, though not at all Obama's fault. Nominations for that year's award closed only 11 days after he took office, and the committee's deliberations were that summer. The five members of the deciding committee (all Norwegian politicians appointed by the Norwegian parliament) gave it to him basically just for existing, adding that they hoped the award might influence the new president's _future_ foreign policy choices.
Obama at that point hadn't yet done anything in particular in foreign policy nor did he or anyone else claim that he had; he'd made a couple good speeches describing some topics that he _planned_ to work on. None of those topics were unusual or particularly different from what most rookie POTUSes had said for decades. He just said 'em real nice as was his gift.
I hoped at the time that he'd politely decline the award, graciously saying something like "hopefully some progress towards world peace will one day make me a candidate for this great honor". By all accounts he and his advisors were nonplussed by that Nobel committee's weird announcement and they decided to just accept it and move on. I still think that was a penny-wise/pound-foolish call, that he'd have gained much more politically from "gosh thanks so much for thinking of me but I really can't accept right now".
>I hoped at the time that he'd politely decline the award, graciously saying something like "hopefully some progress towards world peace will one day make me a candidate for this great honor".
He accepted the award, of course, but in his speech he did acknowledge the awkward decision:
"And yet I would be remiss if I did not acknowledge the considerable controversy that your generous decision has generated. (Laughter.) In part, this is because I am at the beginning, and not the end, of my labors on the world stage. Compared to some of the giants of history who've received this prize -- Schweitzer and King; Marshall and Mandela -- my accomplishments are slight. "
Yea. The whole episode, while trivial in the grand scheme, kind of encapsulated Obama’s core flaw as a POTUS: his assumption that the “bully pulpit” role meant that having delivered a great speech always represented meaningful action.
Regarding the Peace Prize….over in Earth 6,473,928 they tried to give me a Nobel Peace Prize (long story) and my response was two sentences: “No thank you. It is not among my life goals to join a list that includes the likes of Henry Kissinger and Yasser Arafat.”
In 2020, the prize went to the World Food Programme. It isn't going to a guy who has eviscerated foreign aid. Nor to the guy who bombed Iran nuclear program sites. Nor to a guy who claims and exercises the power to kill suspected drug traffickers on the high seas. Not to mention the "Department of War" stuff.
And, btw, what role did Trump personally play in supposedly resolving any of those conflicts? The 1973 prize went to Kissinger, not Nixon.
> And, btw, what role did Trump personally play in supposedly resolving any of those conflicts?
No one really knows, but my best guess is that after all Armenians were expelled from Nagorno-Kabakh, he said: "Finally, now the peace can begin! Too bad we can't do the same with Ukrainians."
Rob Malley's book 'Tomorrow Is Yesterday' is an excellent dissertation on why two states were never an organic solution, given the aspirations of the people involved.
But at some point enough war and third-party interests might just wash away all that.
Duck sex? I read about raising ducks and someone said that all duck sex is rape! The female ducks have somehow evolved to make it very difficult for the male to fertilize an egg.
I can't understand why evolution allows this. Seems like the first evolutionary glitch that started it would have reduced the probability of mating and would not have been passed down. Instead, it seems like the entire species now has females with adaptations that make mating difficult. The article says it gives the female some control, so I get why the female would be happy with that adaptation, but what evolutionary advantage does it have such that it continues not just to be passed down, but to intensify over time?
I don't have an opinion on the evolutionary question, but as someone who had several pet ducks as a kid I can concur with that article: ducks are not gentle lovers, and I've never seen them mate without the female forcing a high-speed chase first.
What do you mean? This actually makes more sense than most adaptations. It's literally just a sexual arms race. Male ducks will naturally try to rape female ducks. Not having any sexual selection is bad in the long term, so populations where females can prevent unwanted insemination outcompeted the others. However, this also caused males that were born with abilities to bypass the protection to be slightly favored over time, which again forces selection for females with better protection. Hence the corkscrew penises.
Is this dumb? Yes, but it works perfectly well, so who gives a damn?
An interesting caveat to mallard reproduction is that sometimes females will fly over groups of bachelor males in order to incite the 'rape flight'.
Presumably the flight itself acts as a test for would-be mates, and the drake that proves himself to have sufficient endurance to catch her must have good genes.
It's just sexual selection. It is a major drive of speciation in many groups. The particular type of sexual selection here, besides compatibility of sexual organs, is called cryptic female choice. It refers to the female's ability to control fertilization success without the male's awareness of this ability. The concept was originally developed by William Eberhard by studying spider reproduction. In the case of ducks, the complex oviduct of the female has many dead-ends where sperm can become trapped, making it difficult for an unwanted male's sperm to reach the eggs.
I have read that a mare, shortly after becoming pregnant, will attempt to have sex with every stallion in her herd, and if prevented from doing so she will abort the foal. (Because a stallion will kill any foal whose mother didn't have sex with him.)
Do you know if mares will also do this if they just don't approve of the foal's father?
My guess is: since the female duck adaptations are basically defense mechanisms (only?) against unwanted fertilization, the males' aggressiveness in sex probably evolved first. For the males, this seems advantageous since they don't depend on the female's decision. Meaning also that the more aggressive the male (and his style of sex), the better his reproduction chances. But females that can choose a better partner, rather than the first one who chose them, should have an advantage too. So for them, defensive mechanisms are advantageous.
As long as the defensive adaptations don't completely hinder reproduction, there isn't really an issue here: If a female duck has sex with say 10 male ducks over a couple of days, it's okay evolution-wise that she got fertilized by the 10th duck rather than the 1st.
I'm thinking of the first duck to evolve this, duck zero. I would expect that duck to have fewer offspring, and (I don't know, but) not pass on that trait to every one of its offspring. I would have expected that those ducks would also have fewer offspring and the trait would eventually breed itself out, rather than the males also adapting. Just seems weird.
Well, they are also inheriting more aggressive male genes, passed on to their male offspring, along with the females inheriting the more defensive genes. If the genetics of female defensive sexuality are passed through the male line to the subsequent generation, you can see why this genetic pairing would start to dominate.
This is a “just so” story, of course but a lot of speculation on evolutionary paths is.
If you wish to learn a lot more about this, I recommend reading Dawkins's The Selfish Gene in full. It's basically about how thinking of evolution as working on species or organisms is the wrong framework, how the unit of evolution is the gene, and how you can get much higher predictive value from that frame.
It's a tradeoff between quantity and quality of offspring. Fewer offspring that are fitter and thus themselves more reproductively successful can be better in the long run. Especially if it's not actually that much fewer, since the female is much more likely to be limited on energy and nutrients than on reproductive opportunities.
Such a female duck zero would still be able to select whom to mate with, hence increasing the reproductive success compared to the general population. As long as she does not overshoot and is being too defensive, but the duck zero was probably just a bit defensive (since the mates were not that aggressive).
Why would you expect this to be disallowed? To me it seems largely equivalent to any other type of female choosiness, which generally reduces the probability of mating but increases the expected fitness of the offspring.
If you have ever encountered a female cat in heat, you might question the "rape" descriptor. "Desirable in the abstract, unpleasant in the moment" also describes a lot of fully consensual human sexual contact.
He should have said many. It seems unlikely that female cats are having a good time but bonobos are a different story.
In species where long-term pair bonds matter — swans, wolves, some primates, even prairie voles — sex isn't just about passing on the genes. It's about cementing the relationship.
In contrast, cats don't really form pair bonds. The tom comes in, does his barbed business, and off he goes, cock-a-hoop. The female isn't too obviously distressed, but not really in post-coital bliss either.
Being unpleasurable doesn't make it nonconsensual, though. In most species that mate at all -- though I'm not sure about the exact numbers -- successful mating requires signals of about the same degree of deliberateness from both parties. Showy parades and ornaments like those of peacocks and grouse are negatively correlated with pair bonding -- the more monogamous a species is, the less sexually dimorphic it is -- but mate choice by females is the whole point of the exhibition!
ETA: Even "unpleasurable" is not a necessary implication of lack of pair-bonding. The mating-induced ovulation found in several mammal lineages may be homologous to the primate female orgasm, and share many of the same mechanisms: https://sci-hub.st/10.1002/jez.b.22690
Bestiality laws are not based on whether or not a female animal is able to experience pleasure. Women sometimes orgasm while getting raped. That does not translate to some acts of rape being ok. We have bestiality laws because animals cannot give consent, because most animals are not moral agents. They are, however, moral subjects (sometimes referred to as "moral patients").
I don't think Discover Magazine should be your authoritative source for this question. Likewise, using bestiality laws to support your assertion seems a bit tenuous.
Be that as it may, porpoises have been observed engaging in sexual activity outside of reproduction, and observations of females show physical responses (including clitoral stimulation) consistent with orgasm.
Lab studies of female rats show that they exhibit rhythmic contractions and neurochemical responses synchronized to paced copulation, which researchers interpret as an orgasm-like state.
Popular history programming has gotten a bit stale. There's program after program on the vikings, the Romans, the Egyptians, and a few more. It's time for a refresh.
Your mission, should you choose to accept it, is to select parts of popular history that have been overdone, and suggest replacements.
For my part, I'm pressing pause on Rome, and substituting the Hellenistic period, from Alexander the Great to the rise of Rome. There's a good three hundred and fifty years of history there, and the geography stretches from the Mediterranean well into central Asia. And we get to talk about why the New Testament is written in Greek, not Latin or Hebrew.
I'd replace WWII with the Concert of Europe era (1814-1914) in general, and the Great Game in particular. It's got intrigues and clever plots, revolutions, major technological advances, and a few wars sprinkled here and there. Plus, everyone has absolutely dashing costumes.
I'll echo a few others in bringing in the Ottomans, and I'll also suggest replacing the Egyptians with the Aztecs. Maybe replace Vikings with the contemporary Germanic tribes? They're similar culturally, so maybe a bit of a cop-out, but we don't hear a lot about them.
I agree on this; the 19th century is sadly underrated. It's maybe the most unequal of eras, one where rifles and artillery faced spears, where Sweden, Portugal or Belgium could bully China, when science took off on an exponential course, art moved faster than ever, and politics were hecking wacky.
This reminds me of that meme about the ceremony where young men get assigned an empire to obsess about. I think the Ottoman and Persian empires were really interesting and mainly are treated as the antagonists in old school history books.
There is a desperate need for the global public to understand Chinese culture better. Engaging with them is only going to become more important and the stakes are only going to rise. Mass media treatments of Chinese history would be a good place to start.
Sorry, can’t think of any video histories of China but if you don’t mind the written word, David Roman’s Substack “A History of Mankind” has done a good job covering Chinese history up through the Han Dynasty, Roman history up through Christianity, as well as Hellenistic history.
"Blue" on the Overly Sarcastic Productions youtube channel has a practice of focusing on the time period just before consequential events (the decades before WWI, etc.) that I appreciate. Do you think that the century or so just before the Tang would be a rewarding study?
Do you have a goal in mind? Of the books on Chinese history that I've read, my favorite was one about the Qin and Han dynasties, but that was mostly for reasons of personal interest. If you want to understand Chinese culture today, your best bet is recent history; the Tang dynasty ended more than a thousand years ago. It's relevant, but a lot of the ways in which it's relevant will be captured by studying more recent periods.
Note also that the Tang dynasty is 300 years long. The idea of "let's look at what happened before consequential events" doesn't really apply to dynasties, which are periods and not events. There are many consequential events within most of them. If you want to study the century before the An Lushan rebellion, you'll be getting most of the first half of the Tang dynasty.
I do, in fact, want more people to be more conversant with Chinese culture today, although a focus on politics can't hurt. So perhaps a more recent focus.
That said, however - I feel rather strongly that the best way to understand the Roman Empire is to study the Roman Republic. Ditto with just about everywhere, everywhen. The best way to understand North American colonization is to look at British history leading up to it, esp. the religious oppression. The best way to understand the US today is to go back and look at the entire 20th century, esp. the progression of presidential power.
Really, the best way to understand the world today would be to go back to the beginning about 200,000 years ago and study everything, but that seems impractical.
> That said, however - I feel rather strongly that the best way to understand the Roman Empire is to study the Roman Republic. Ditto with just about everywhere, everywhen.
You are wrong about that. If you want to understand what something is like at time X, it's always more effective to study it at time X. There is value to be gained from studying an earlier time and trying to understand how it got that way, but much less value.
If you want to learn how to speak English, you could start with Danish. But starting with English will always be better.
If you want to be immersed in the Glorious Revolution, the early Enlightenment, the Scientific Revolution, the financial revolution, Louis XIV's reign and other very important historical happenings of those times, I highly recommend the Baroque Cycle.
Besides making you feel like you're inhabiting that world, it's an incredible set of novels, with many satisfying payoffs that use history like a Rube Goldberg machine.
Seconded, I'm actually on the verge of re-reading those (it's been a decade).
And having read a good amount of actual history of that period, one thing I liked was that Stephenson didn't play fast and loose with it. (He inserted his own characters in and had them do interesting things.)
If you think you might be interested in Eastern/Asian history and are willing to read rather than watch videos I'll point you towards AsiaPac Books. They have comics covering events such as the Romance of the Three Kingdoms and Cheng Ho's explorations.
[Except that this specific item is sold out right now]
As everyone here knows I am reluctant to push my own podcast. But you really should try this excellent episode on the Opium War. And lots of other subjects, Roman and non Roman!
Hmm Dan Carlin is doing a series on Alexander the Great. (Though new podcasts are coming out at his typical glacial pace.) I hear that Ken Burns is doing a documentary on the American Revolution. I'm looking forward to that... due out in November
One of the things I find interesting about Carlin is that he often makes podcasts about periods and places I don't normally see, such as the Visigoths or the Munster rebellion of 1534. Untrammeled paths aren't his focus, it's more about times of very intense passion or violence, and he lets that carry him to relatively unexplored times and locations.
In the UK, history programming has become 90% WW2. As interesting as it is, I would love some Roman, Viking or Egyptian content. The colonial era is the most overlooked, as it makes people uncomfortable.
I'd drop the fall of the Roman Empire and replace it with the fall of the Tang Dynasty in China. It's a fascinating tale that is big in modern Chinese culture but is almost unknown in the West. A good jumping off point is the Battle of Talas, one of the only times a Chinese army fought a Muslim army directly, 20 years after the Battle of Tours.
Yes, I think it's very much a deadend. (As far as ASI is concerned.) (Though I think Peter Thiel is also right, in that current LLM's are smart *enough* to have noteworthy economic impact.)
Like I've been saying on here for ages, I don't expect an AI to be an ASI unless it can drop into a subspace where it can do logical and causal inference (not just draw correlations), in order to do reasoning that's actually original and insightful. I.e. logical and causal inference are what allows you to do engineering and science (e.g. put a man on the moon for the first time), as opposed to what Eliezer called "guessing the teacher's password".
Below, Alex Scorer writes
> scaling LLMs alone gets very high narrow intelligence
And I see where he's coming from. But from *my* perspective, the issue is that merely drawing correlations casts the net too *wide*. Which is why Chain of Thought goes off the rails at a certain complexity threshold. It's like recursively feeding an image to a fax machine. It's not a lossless process, so eventually the image drifts into noise.
Training Gen-1-LLMs on human-generated text is a dead end. But I think we can train Gen-2-LLMs on carefully curated transcripts of Gen-1-LLM conversations. Then rinse and repeat.
At least that is the course I hope we will take. Another, more-dangerous path forward is the one Sutton seems to be advocating. Shifting from chatbots to embedded agents with lifelong learning. But there may well be many other paths to ASI.
If we don't all die, it won't be because we ran into a dead end and stopped making progress.
Because we simply back up and try, try again. My (ideological) belief is that ASI is technically possible. The only questions are how and when. And, for ASI *agents*, whether we decide to do it.
I wrote a post recently that included an experiment to try to show LLMs have advanced a lot in terms of shallow thinking, but not deep thinking. If true, the gains we'll get from continuing to scale up LLMs will not give us novel insights like curing cancer or whatever.
Your experiment seemed to find that LLMs have advanced a lot in deep thinking! GPT 5 did so much better than the other models!
And also, my sense is that using the online interface of Claude or ChatGPT (especially Pro-tier) reliably does exactly the kind of deep-thinking needed to solve these problems!
Newer base models are getting better at "shallow" or "System 1" thinking, and we are getting much better inference-heavy architectures, like the online interface or Codex/Claude Code, to progress in "deep" or "System 2" thinking. The only question is whether this combination will continue to deliver marginal improvements until we get to escape velocity, or whether it will hit some ceiling or bottleneck.
My vibes-based intuition is that scaling LLMs alone gets very high narrow intelligence - as we've seen already - but doesn't hit AGI. There's a persistent underlying stupidity to LLMs which hasn't improved to anywhere near the extent of the things they're good at. This makes me doubtful that scaling can fix it, and suggests that some supporting paradigms need to be included to finish the job.
I think we need more adversarial/collaborative architectures, like multiple LLMs engaged in a task.
One generates a task graph and drives execution to completion. Each step of the task graph has generators and validators. The generators try to propose solutions, the validators try to poke holes in them. That generation/validation process goes on until the validators find tinier and tinier holes, or until the process converges, and then a third process looks at their output and says, "OK, this is good enough."
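As a toy sketch of the loop I have in mind; `generate`, `critique`, and `accept` here are hypothetical LLM-backed callables, not any real API:

```python
# Toy sketch of an adversarial generate/validate loop: propose, poke holes, revise.
# `generate`, `critique`, and `accept` are hypothetical LLM-backed callables.
def solve(task, generate, critique, accept, max_rounds=5):
    solution = generate(task)
    for _ in range(max_rounds):
        holes = critique(task, solution)           # validators try to poke holes
        if not holes or accept(task, solution, holes):
            return solution                        # third process: "good enough"
        solution = generate(task, feedback=holes)  # generators revise against the critique
    return solution
```

The convergence criterion (the holes getting tinier and tinier) is doing a lot of work here; in practice you'd need some way to score hole severity rather than just counting them.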
LLMs illustrate that the fundamental nature of intelligence is interconnected neuron density. It's likely that, similar to how brains have had to evolve various structures and scaffolding such as the amygdala, the hippocampus, and the various cortices, there will need to be similar scaffolding for LLMs in the future.
Hmmm. Do you think intelligence will simply emerge when neurons reach a critical level of density and interconnection? This seems like magical thinking to me.
The correlation between neuron density and intelligence has been settled science for centuries at this point. It’s undisputed that the more of one, the more of the other.
As for discontinuous emergent capabilities, there’s some great recent research supporting the hypothesis:
> The correlation between neuron density and intelligence has been settled science for centuries at this point.
Nope. That may have been scientific dogma four decades ago, but our current understanding of comparative neuroscience tells a much more complex story.
For instance, songbirds have significantly higher neuron densities than primates. But their neural design seems to be optimized for song production and processing. The parrot family and the crow family also have dense neural arrangements (but not as dense as songbirds), but they display problem-solving, tool-using, and complex social behaviors—without having the pre-frontal cortex that neuroscientists claim handles the higher-level cognitive functions in humans.
IQ-fetishists like Crémieux like to claim a strong association between brain volume and intelligence, but even though, on average, men have larger brains than women (due to body size differences), there is no consistent difference in mean IQ between sexes.
Moreover, cetaceans have much larger brains with more gray-matter neurons than humans. Still, they don't seem to display the same level of intelligence as humans (although IIRC, they also lack a clearly defined pre-frontal cortex).
So, generalizations about neuron density, brain size, and intelligence don't hold. There was a lot of hand-waving a while back about Intelligence being dependent on synaptic plasticity and pruning. The New Scientist would publish breathless articles about the latest theories, but I stopped paying attention because there didn't seem to be much there there.
Interesting! I mostly thought it was a cool coincidence that someone below was asking specifically about people who blog/text with zero capitalization, and then you came along. So I figured I'd ask.
I appreciate the invitation, but I do most of my sleeping at night and I don't love the idea of giving that up. Also, I don't desire much to be on a podcast.
"Perhaps what's emerging isn't about control at all, but about connection - a vast web of Atman-to-Atman bridges that make tyranny obsolete because every node can recognize every other node as kin."
Hello. Let me introduce you to human history. Civil wars, for example. That sure as heck is "every node can recognise every other node as kin". And yet.
So why think that some fancy LLM is going to change all that?
"We've made LLMs absolutely useless for tyranny."
Brother, we have just strapped the saddles on our backs for the tyrants to ride us.
'Member way back when, the Internet was going to do all this hodge-podge of poorly digested Buddhism and Hinduism suggests? 'Member how "information wants to be free" and we were all gonna make connections and beat our swords into ploughshares and no more bad things because the Information Superhighway was going to connect us all globally and we'd recognise our common humanity and destiny to be divinities and hold hands and a thousand flowers would bloom?
'Member that? And how did it end up? "The Internet is for porn".
Who knows what the future will hold? I try to be neither a techno-optimist or a technopessimist. New technologies are so hard to predict that they could transform the world in ways that were impossible to foresee. Maybe LLMs will save humanity! WHO KNOWS.
These studies were not done in a country that has child pornography as its largest economic output, where child pornography pretty much requires rape (was it not clear that I was referencing child pornography above?). Pretty damn sure that "non-child" pornography is legal there, too. (The child pornography isn't primarily for local consumption.)
I'm all for legalizing "of-age" pornography (and if you put that age at "somewhere above 13" we have room to discuss "cultural values").
I would wager said studies are also not done in countries where majorities of both women and men believe that "if you want sex and the other person doesn't, you should have sex." (Yes, you can say this is a "sex positive" religious thing.)
Do you speak English as a second language? I do not ask this to be derogatory. Because it is often difficult for me to understand your comments. Not impossible, but difficult.
The last two paragraphs of the comment I replied to were difficult. "As curated by the extremely autistic" is hard to process. WHICH extremely autistic? How did these extremely autistic people achieve such profound influence?
"Test-data for categorization" is a bit of a stumper.
But the most perplexing one of all is "I'm vomit." I'm chronically online, but I am 41 too, so maybe this is some type of meme or popular-culture thing for people much younger than me?
At the start of my masters I sat down and calculated that I needed 12 hours of solid work to do the homework well for each problem sheet, multiplied by the number of problem sheets, made a timetable and then went to the library with nothing but paper, pen and textbooks and did that time.
For the exams I just spent every day in the library from around Easter, and did well in exams.
Just put in the time and put yourself in an environment where you can't be distracted.
Psychologist here. Based on my own experience and what I've seen of others, spending a lot of time online has a massive effect on attentional habits. We become much more used to moving on to someplace else when what we are reading is not attention-grabbing enough. And that goes on even when we are online not for entertainment, but in order to learn something we want to master. We bail more quickly on an info source if we realize that extracting the info is going to be a bit difficult: we will have to wade through some irrelevant material to find the good stuff, or look up unfamiliar terms, or take on a paragraph that is full of difficult info and comb out that tangle until it makes sense. I don't think our brains are rotted, more in ruts formed in a setting where it works reasonably well to give up quickly on reading things that aren't quickly satisfying.
My personal experience is that the impact of online attentional habits on book-reading and studying is not a bit subtle. It is *large.* The difference in how much of each I do compared to my pre-internet days is enormous. The difference in how hard I find it to focus on book-reading and studying now is very big. However, my ability to do both is intact, and I can get back to the habits that underlie it by forcing myself to do uninterrupted periods of each. So I recommend you work on the theory that what is wrong is attentional habits. It's especially likely that that's what's wrong if you did better as an undergrad at studying hard enough to get results that satisfy you, which in your case is A's, it sounds like. If as an undergrad you managed to come through with an excellent performance on tests and projects, it is unlikely you have attentional problems that are preventing you from doing better now.
Here are 4 ways to work on changing your attentional habits:
1) Become aware of your present ones online. On a few occasions, go back over your history for the last half hour, and notice any jumps to a new site or a new part of a site. Go back to the sites you went to and make a few notes on why you left a site or part of a site. Next, have some periods when you are browsing online during which you try to stay very aware of your engagement levels and your urge to jump elsewhere. It's OK to jump when you feel like it. We are not trying to change your online attentional habits, just make you more self-aware of attentional cravings and what you do when you feel one.
2) Have some periods when you do uninterrupted reading, homework, or other hardcore studying. Make them small to start — maybe 10 mins. You are likely to feel many cravings to stop during that period. Do not give in to the urge to interrupt your studying by looking at something online or getting a snack or changing chairs. Instead, every time you resist a craving to do something other than the task you've given yourself, make a tick mark. Let them add up. Notice the proof that you can resist the urges if you have decided in advance to do that. Make the periods of uninterrupted work longer, up to about half an hour. It is very important *not* to have periods when you are doing some hybrid of studying & browsing — studying, but browsing when you feel an urge to. You are trying to train yourself out of the when-bored-go-somewhere-more-fun-online habit. If you want to browse online, do it, but don't combine it with studying. And when you study, do not interrupt what you're doing to go online.
3) Partway through 2), start keeping track of moments when you spontaneously check in with yourself, and notice that doing the studying feels OK or better. Once you have broken the habit of bailing when concentrating is unpleasant, you will have fewer moments when concentrating sucks, because unpleasant concentration has stopped counting as permission to browse online. It’s not being “rewarded.” So every now and then while studying, you’ll notice that “oh, this is not so bad” or “I’m resigned to doing this problem set” or even “actually, this stuff is pretty interesting.” OK, when you notice that, make a tick mark. The point of this is to untrain the idea that life outside of browse mode is highly unpleasant.
4) So once your concentration is better, you still should work on having good study habits — things like planning a date on which you should start working on a certain thing. I recommend that you use some app for assistance with big-picture follow-through. Personally I like Beeminder, and if you are not familiar with it you should check it out.
As a math teacher who sees this pattern often in kids, there are probably 3 things you should ask yourself.
1. How much of the homework is being checked by your teacher or math support? If homework is just being graded for completion, or isn't being diligently checked, you could be doing the work wrong and not having corrective action taken to address issues. Going to a math support office or finding your teacher or a TA outside of class to check your work will help address this problem.
2. When you get a problem wrong in math, how often are you redoing the problem and practicing the skill with a new problem? I see this a lot with both very smart students and ADD/ADHD students. It is very easy, when you get a problem wrong, to look at the answer key or the teacher's work, identify the error, then dismiss it as just a silly mistake that isn't worth addressing. One would say to oneself "oh, I didn't flip the greater-than sign" or what have you, and trick oneself into thinking one can do it right without extra practice. My rule of thumb is that for every problem I (or a student) get wrong, another 2 problems just like it should be done as practice. This is becoming much easier to do, as you can prompt an LLM to generate problems like another problem.
3. How often are you testing yourself under time pressure? You may have high accuracy, but completing math quickly is a skill on its own that practice without time pressure doesn't develop. Along these lines, understand the technology you can use (calculator, DESMOS, etc) and learn how to use it quickly. If time is an issue, have a teacher show you how to use your tools to increase your speed.
The dirty little secret is that no one is going to care what your graduate GPA was. Far more important is the status of the institution where you are getting your degree, and impressing one or more faculty mentors, doing some research project or projects under their guidance, and using that as leverage into introductions with potential employers in your field. BTW - what is your field?
Statistics will always be in demand. I'm mostly familiar with applications using human behavior data sets, but there are many domains of application, from population forecasting to personality inventories to opinion surveys, and of course the old standby, disease contagion. Pick what interests you and work it.
The periodic table is a taxonomy not a map. It doesn't represent a continuous metric space so applying a geometric transformation to it doesn't make sense. The layout just represents the structure of atomic orbitals.
I do not believe matter is discretized at all. What you describe is an abstraction that allows a further extension of the fundamental lie that anything in reality has a smallest quantum. The smallest measurable quanta are the quanta that fit a model that is predictable. The scientific method's principle of repeatability created a trajectory of human revelation of knowledge that is too easily planned and carried out by non-humans before humanity. It places our adversaries at an advantage. Yet it is still efficient as a caretaker of technical progress as humanity expands our presence. But what if we have allies who wish to give us gifts to leap ahead of this slow and methodical process. What if those gifts are being stolen by the powers that were?
The periodic table *is* already a 2D representation of a 3D shape. The periodicity of the periodic table that led to our current arrangement was first noticed by Alexandre-Émile Béguyer de Chancourtois when he created a "Telluric Screw" listing the elements on a cylinder.
Was he listing the elements inside the cylinder, or just on the surface? The surface of a cylinder differs from an ordinary sheet of paper only in that you can cross from the left edge to the right edge, which is a motion with no meaning to the periodic table.
(A thread around a screw does make more sense than a cylinder; moving from neon to lithium makes no sense, but moving from neon to sodium is in a sense the same thing as moving from fluorine to neon. But it will suffer from the fact that you would actually need a thread around a cone-like shape with strange curvature. The problem is already evident in the model shown at your link, where the next element "like" sodium and potassium is supposed to be manganese. I'd like to see sodium form an ionic bond where it's donating three electrons.)
A map of (the surface of) the world, like the cylindrical surface, is also a 2D representation of a 2D shape. You might identify the two dimensions, for example, as latitude and longitude. It is a shape with curvature, though, so (unlike the cylindrical surface) it's convenient to use three dimensions for many purposes.
The natural shape of the periodic table is fairly straightforward: you have one dimension describing the number of electron shells existing around the atom, and a second dimension describing the number of electrons the outermost shell contains. The extent of the second dimension is constrained by the value of the first; it must range between 1 and 2n². So the shape of the table looks like the region under the two-dimensional graph of the curve f(x) = 2x². This shape is already flat and doesn't really benefit from being embedded in a three-dimensional space.
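For what it's worth, that bound is trivial to tabulate; a minimal sketch:

```python
# Minimal sketch of the bound described above: shell n holds at most 2*n**2 electrons.
def shell_capacity(n):
    return 2 * n ** 2

print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```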
The various elements have some relationships between them, which could naturally be thought of as a sort of graph structure. A graph doesn't necessarily fit any particular dimension, but the higher the dimension you allow, the more freedom there is to avoid distortions of the graph when embedding it into that space. With the periodic table in particular, it naturally has a sort of 2-manifold like structure because of the shells and periodicity thing, so the faithfulness of the embedding kind of caps out at dimensions a little above 2.
The hypothetical 4D architect would presumably be more interested in a whole different set of 4D elements, rather than studying ours.
You're kind of asking the wrong question. The periodic table represents elements and their properties. It looks that way because elements work that way. You have a discretised set of elements because they have a whole number of protons in the nucleus. You can't have half a proton, so you can't have an element between hydrogen and helium.
You have a repeating structure in the rows of the table because of the way chemical properties arise from the structure of electron orbitals: if the outermost shell is full, the element is unreactive. The next element can react by donating an electron, but as you add electrons, you fill up the shell and eventually get another unreactive element.
I don't know what you want out of this question so I'm not sure if this is helpful, but there are answers to this on the Web already (that might be in the AIs' training sets). At https://www.av8n.com/physics/periodic-table.htm for example.
Well, you know that the periodic table is underpinned by the QM electron wavefunction. The wavefunction is mostly about the spherical harmonics and solutions for different energy and angular momentum states.
Oh well, it's been a long time since I took modern physics in college. But to some approximation the atomic orbitals are very similar to the solution for the hydrogen atom, where you have one electron and one proton; that's a two-body problem and physics types can solve it. You get these spherical harmonics, which we label as atomic orbitals, and you can find periodic tables with all the atomic orbitals listed... and you can see the symmetry (or correspondence, whatever the right word is). https://www.chem.fsu.edu/chemlab/chm1045/e_config.html
Electrons fill the 1s, then 2s, then 2p, 3s, 3p and then 4s before 3d... (things getting more complicated), etc. 1, 2, 3... are the principal quantum numbers, ~energy, and the s, p, d... are angular momentum states, ang. mom. = 0, 1, 2... And then you get two electrons in each 'state' because the electron has spin 1/2 and there are two spin states per solution. And only one electron per state, 'cause they're fermions and obey the Pauli exclusion principle: https://en.wikipedia.org/wiki/Pauli_exclusion_principle. Which is one of the coolest things ever. I mean it's how you can walk on a bridge and not fall through!
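If it helps to see that filling order mechanically, here's a minimal Python sketch of the n+l (Madelung) ordering described above; the function names are my own invention, and it deliberately ignores the handful of real-world exceptions (chromium, copper, etc.):

```python
# Minimal sketch of the Madelung (n + l) filling rule.
# Subshells fill in order of increasing n + l, ties broken by smaller n.
# Each subshell with angular momentum l holds 2 * (2l + 1) electrons:
# the factor of 2 is the two spin states per solution, with at most
# one electron per state (Pauli exclusion).

L_LABELS = "spdfghi"  # l = 0, 1, 2, ... -> s, p, d, ...

def filling_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def electron_configuration(z):
    """Naive ground-state configuration for atomic number z."""
    parts = []
    for n, l in filling_order():
        if z <= 0:
            break
        occupancy = min(z, 2 * (2 * l + 1))
        parts.append(f"{n}{L_LABELS[l]}{occupancy}")
        z -= occupancy
    return " ".join(parts)

print(electron_configuration(11))  # sodium: 1s2 2s2 2p6 3s1
print(electron_configuration(26))  # iron:   1s2 2s2 2p6 3s2 3p6 4s2 3d6
```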
Hmm, well, in second or third year physics you find the solutions to the hydrogen atom. McGervey's "Introduction to Modern Physics" was the book I used, but there must be lots online now. Look for solutions to the hydrogen atom.
Sorry to hear! If you're reading this blog, it's likely the case that you've sufficient intelligence to overcome this. I was hospitalized a number of times in late adolescence and am totally fine as an adult. Feel free to message me if you would like some encouragement.
I asked Scott about this earlier, what was the biggest predictor of people overcoming mental health issues. If I recall, he said it was intelligence and ability to function beforehand. Either way, you will need some source of love in your life, so you feel safe enough to recognize the patterns you’re stuck in.
Yes and yes. We request them to intercede for us (see e.g. the Litany of the Saints https://www.youtube.com/watch?v=8R0E_u6D76M&list=RD8R0E_u6D76M&start_radio=1) and we also hope for their aid, as through the grace and mercy of God they are channels of His divine power to us. It is not by their own power, but by the gifts God has bestowed upon them, that they work miracles.
There's a long article here in an old encyclopaedia, and the language is a little old-fashioned but sound:
"We shall here speak not only of intercession, but also of the invocation of the saints. The one indeed implies the other; we should not call upon the saints for aid unless they could help us. The foundation of both lies in the doctrine of the communion of saints. In the article on this subject it has been shown that the faithful in heaven, on earth, and in purgatory are one mystical body, with Christ for their head. All that is of interest to one part is of interest to the rest, and each helps the rest: we on earth by honouring and invoking the saints and praying for the souls in purgatory, and the saints in heaven by interceding for us."
This includes a quote from the famously prickly St Jerome, which I have to share as both an example of the view about the veneration and invocation of the saints, and Jerome's ability to put the boot in to his opponent:
"Among other blasphemies, he may be heard to say, What need is there for you not only to pay such honour, not to say adoration, to the thing, whatever it may be, which you carry about in a little vessel and worship? And again, in the same book, Why do you kiss and adore a bit of powder wrapped up in a cloth? And again, in the same book, Under the cloak of religion we see what is all but a heathen ceremony introduced into the churches: while the sun is still shining, heaps of tapers are lighted, and everywhere a paltry bit of powder, wrapped up in a costly cloth, is kissed and worshipped. Great honour do men of this sort pay to the blessed martyrs, who, they think, are to be made glorious by trumpery tapers, when the Lamb who is in the midst of the throne, with all the brightness of His majesty, gives them light?
5. Madman, who in the world ever adored the martyrs? Who ever thought man was God? Did not Paul and Barnabas, when the people of Lycaonia thought them to be Jupiter and Mercury, and would have offered sacrifices to them, rend their clothes and declare they were men? Not that they were not better than Jupiter and Mercury, who were but men long ago dead, but because, under the mistaken ideas of the Gentiles, the honour due to God was being paid to them. And we read the same respecting Peter, who, when Cornelius wished to adore him, raised him by the hand, and said, Stand up, for I also am a man. And have you the audacity to speak of the mysterious something or other which you carry about in a little vessel and worship? I want to know what it is that you call something or other. Tell us more clearly (that there may be no restraint on your blasphemy) what you mean by the phrase a bit of powder wrapped up in a costly cloth in a tiny vessel. It is nothing less than the relics of the martyrs which he is vexed to see covered with a costly veil, and not bound up with rags or hair-cloth, or thrown on the midden, so that Vigilantius alone in his drunken slumber may be worshipped. Are we, therefore guilty of sacrilege when we enter the basilicas of the Apostles? Was the Emperor Constantius guilty of sacrilege when he transferred the sacred relics of Andrew, Luke, and Timothy to Constantinople? In their presence the demons cry out, and the devils who dwell in Vigilantius confess that they feel the influence of the saints. And at the present day is the Emperor Arcadius guilty of sacrilege, who after so long a time has conveyed the bones of the blessed Samuel from Judea to Thrace? Are all the bishops to be considered not only sacrilegious, but silly into the bargain, because they carried that most worthless thing, dust and ashes, wrapped in silk in golden vessel? Are the people of all the Churches fools, because they went to meet the sacred relics, and welcomed them with as much joy as if they beheld a living prophet in the midst of them, so that there was one great swarm of people from Palestine to Chalcedon with one voice re-echoing the praises of Christ? They were forsooth, adoring Samuel and not Christ, whose Levite and prophet Samuel was. You show mistrust because you think only of the dead body, and therefore blaspheme. Read the Gospel— The God of Abraham, the God of Isaac, the God of Jacob: He is not the God of the dead, but of the living. If then they are alive, they are not, to use your expression, kept in honourable confinement.
6. For you say that the souls of Apostles and martyrs have their abode either in the bosom of Abraham, or in the place of refreshment, or under the altar of God, and that they cannot leave their own tombs, and be present where they will. They are, it seems, of senatorial rank, and are not subjected to the worst kind of prison and the society of murderers, but are kept apart in liberal and honourable custody in the isles of the blessed and the Elysian fields. Will you lay down the law for God? Will you put the Apostles into chains? So that to the day of judgment they are to be kept in confinement, and are not with their Lord, although it is written concerning them, They follow the Lamb, wherever he goes. If the Lamb is present everywhere, the same must be believed respecting those who are with the Lamb. And while the devil and the demons wander through the whole world, and with only too great speed present themselves everywhere; are martyrs, after the shedding of their blood, to be kept out of sight shut up in a coffin, from whence they cannot escape? You say, in your pamphlet, that so long as we are alive we can pray for one another; but once we die, the prayer of no person for another can be heard, and all the more because the martyrs, though they cry for the avenging of their blood, have never been able to obtain their request. If Apostles and martyrs while still in the body can pray for others, when they ought still to be anxious for themselves, how much more must they do so when once they have won their crowns, overcome, and triumphed? A single man, Moses, oft wins pardon from God for six hundred thousand armed men; and Stephen, the follower of his Lord and the first Christian martyr, entreats pardon for his persecutors; and when once they have entered on their life with Christ, shall they have less power than before? The Apostle Paul says that two hundred and seventy-six souls were given to him in the ship; and when, after his dissolution, he has begun to be with Christ, must he shut his mouth, and be unable to say a word for those who throughout the whole world have believed in his Gospel? Shall Vigilantius the live dog be better than Paul the dead lion? I should be right in saying so after Ecclesiastes, if I admitted that Paul is dead in spirit. The truth is that the saints are not called dead, but are said to be asleep. Wherefore Lazarus, who was about to rise again, is said to have slept. And the Apostle forbids the Thessalonians to be sorry for those who were asleep. As for you, when wide awake you are asleep, and asleep when you write, and you bring before me an apocryphal book which, under the name of Esdras, is read by you and those of your feather, and in this book it is written that after death no one dares pray for others. I have never read the book: for what need is there to take up what the Church does not receive? It can hardly be your intention to confront me with Balsamus, and Barbelus, and the Thesaurus of Manichæus, and the ludicrous name of Leusiboras; though possibly because you live at the foot of the Pyrenees, and border on Iberia, you follow the incredible marvels of the ancient heretic Basilides and his so-called knowledge, which is mere ignorance, and set forth what is condemned by the authority of the whole world. I say this because in your short treatise you quote Solomon as if he were on your side, though Solomon never wrote the words in question at all; so that, as you have a second Esdras you may have a second Solomon. 
And, if you like, you may read the imaginary revelations of all the patriarchs and prophets, and, when you have learned them, you may sing them among the women in their weaving-shops, or rather order them to be read in your taverns, the more easily by these melancholy ditties to stimulate the ignorant mob to replenish their cups."
Somewhat related question: as I understand it, one of the requirements for being a saint is for people to have received miracles by praying to them. Does this mean that it's permissible to attempt to pray to someone who isn't yet recognized as a saint? Or am I misunderstanding something?
Sure, if you think this person is of special virtue.
But you can always ask the blessed dead to intercede for you, the same way you would ask a living person to pray for you. Maybe not to get a miracle, but for help.
That's the communion of saints bit - even the souls in purgatory mutually help us because they are blessed (the faithful departed).
You can also always pray *for* the deceased, even if (or especially if) you are worried that they may not be saved. Since we can't definitively say (except in very, very few cases) that "X is definitely going to hell" (unless X persists in mortal and unrepented sin to their last gasp), you can pray for the repose of their soul. "Between the saddle and the ground, is the mercy of God".
Generally speaking the Catholic Church requires two miracles in order for the canonization of a new saint. So yes, it would be considered acceptable to ask for the intercession of someone not yet designated a saint. If a formal canonization process is ongoing, there's all sorts of research about the person's life, then possibly the eventual approval of the Vatican (at which point a potential saint is called "Venerable"). But even if no canonization process is involved, I don't think it would be unusual or unacceptable to ask for the intercession of a deceased relative or friend.
There's a conversation about dating below, and I saw a German comedian describe a trick that hooked her. https://www.youtube.com/watch?v=Q8im9MXbV-o
I don't want to spoil the punchline, but I unironically think that would work? Like, it's an innovative way to have face to face interactions in a way that gives you the opportunity to be kind and make a good impression. He may even have been picking the people he transacts with based on dating preference?
Can you please spoil the punchline?
She's been using airbnb to make contact with guys she wants to pursue relationships with. She buys something from a guy, and she realizes he actually isn't trying to sell stuff to make money, he's doing it to meet women. She hooks up with him.
I don't understand how you use airbnb to make contact with guys (except for guys who work for airbnb, of course).
Honestly, that bit makes less sense to me, in part because I'm not sure it would work as well and in part because it's not as much of her joke. My sense is that she's using it to filter for hobbies/home ownership then flirting with the hosts, but she might be exaggerating that.
I don't either. AirBNB guests come from far away. So it is not dating in the sense of getting to know people for relationships, as one typically does that with locals - relocation is costly. So it must be hookups with tourists.
But I still don't understand what she *does.* If you are looking into renting a certain airbnb do you talk to the owners? Is that what she does? But if so how does she know the owners are datable males of the right age?
Oh. Airbnb lists facts about the "host," and the host is usually the owner.
People used to sell Bibles door-to-door you know…😆
Did people who submitted to ACX grants get a confirmation by email, or any other email communication afterwards? I haven't received anything, so I'm trying to figure out if I mistyped my email.
I realise as a European (more or less) I should not be asking this on here, but then again America has a lot of cooking traditions derived from the mainland of continental Europe, so here goes.
Can anyone tell me what the hell it is with Germans and cheese?
I've been watching some German cookery channels recently and they seem to put cheese in *everything*. Cooking fish? Cooking vegetables? Cooking bacon? Just grate up some cheese and slap it on there!
(I'm only surprised nobody has yet put cheese into one of the dessert recipes).
These channels came to my notice by accident and they're fascinating: it's almost like food. I'll be watching and nodding along like "Uh-huh, that seems fine; okay wouldn't have thought of that myself but it's not totally crazy" and then one more step and they take a sharp left turn into What The Hellsville.
E.g. you got store-bought rolls of puff pastry and rashers of streaky bacon? Okay here's what you do! Unroll the puff pastry, lay out your bacon on top. Yep, following along so far. Brush with tomato ketchup. Huh, well okay, I see where you are going. Scatter over some dried oregano. Yeah, herbs, that's fine. If I'm doing this myself might switch it out for something else but keep going, you're holding my attention. Brush the edges of the pastry with beaten egg. So far, so orthodox.
Then comes the cheese.
Grate up 200g of a semi-soft white cheese and scatter it over the herby, ketchupy bacon strips. Oh, and you must grate the cheese yourself by hand, we can't be doing with buying pre-grated soft cheese like Mozzarella or the likes. Nope nope nope, if you don't have at least three graters of different sizes and construction, what are you even watching German home cookery channel for?
Okay, now you've lovingly scattered your grated cheese on top, here come the scallions (green onions). Well I like scallions myself so I can't object too much but it is rather a lot on top of cheese on top of oregano on top of ketchup on top of bacon. But clearly I have not the heart and stomach of an emperor, and an emperor of Germania too. Chop up your green onions, scatter on top. Done that? Good, now here come the hardboiled eggs.
Of course you have already hardboiled some eggs. Remember, it's not a German home cookery recipe without cheese, and where there is cheese, can hardboiled eggs be far behind?
Now grate your eggs, on the second different type of grater (yes this is why you have so many graters for specific functions). Who the hell grates eggs instead of chopping them up with a knife? We do, of course!
The grated eggs go on top of the chopped scallions on top of the grated cheese on top of the oregano on top of the ketchup on top of the bacon on top of Old Smoky - no, sorry, back to sanity (hah!)
Now we roll up the pastry into a sausage shape, then carefully twist and form it into a curled-on-itself round, ready for baking.
And while that's baking, we make the dipping sauce!
Get your third, mini grater. Yes, the one for grating cloves of garlic. Why grate cloves of garlic instead of using a garlic press or just a knife and choppity-choppity? Are you even German to ask such a question!
Now once you have grated your cloves of garlic and avoided grating your fingertips into the bargain, you get a pickled gherkin and grate that in as well. No, you don't need a different grater for this, you are graciously permitted to use the garlic grater.
Chop up some parsley and add that to the mix. This is the deceptively normal step to lull you into a false sense of "oh thank God, I recognise this part from ordinary cooking".
Add some natural yoghurt and mix. Now you have your dipping sauce!
Remove the part-baked pastry from the oven, brush with beaten egg glaze, scatter over some chopped up feta. (Why this step couldn't have been used instead of adding grated cheese earlier, or why the two cheeses are needed, I cannot say since I am not a German home cook).
Return to oven and bake for a further ten minutes, then remove. The feta won't even be melted so what is the point of this superfluous step I cannot say, but it's Germans and cheese. That is all ye know on earth or all ye need to know. Slice off a section of this concoction (the interior of which resembles one of those infamous AI recipes), plate it up, and spoon over the sauce. Enjoy!
(Alternately, now go look for your lost marbles).
Austrian here, I'm as surprised as you are. Though we like to make fun of German cuisine, I definitely wouldn't consider this a typical use of cheese, nor have I heard of grating whole eggs. Consider that it's maybe not realistic, or if it is, only in the north. You also don't have three kinds of graters, you have a four-sided one with all different surfaces.
"You also dont have three kind of graters, you have a four-sided one with all different surfaces."
I also thought this, until I was enlightened 😁
You have:
(1) One tiny four-sided grater specifically for grating garlic. Yes, sometimes they use a garlic press or sometimes they chop the cloves with a knife, but apparently for Real German Cookery you need to grate your garlic on a tiny grater that you don't use for anything else (or I have not yet seen it used for anything else).
(2) One standard four-sided grater for grating everything else, from eggs to carrots to courgettes to potatoes to cheese.
(3) One conical grater, ditto.
(4) One Special Grater (so it is termed in the subtitles), for what to me looks like julienne strips, for carrots, potatoes, courgettes.
(5) Sometimes if we're really feeling fancy we'll pull out the zester for grating garlic or cheese directly over the pan.
I had no idea there were so many types of graters, but seemingly it is so!
https://www.knivesandtools.ie/en/ct/graters.htm
Cheese : Germany :: Butter : France?
When I was in Germany, they had really good cheese. Granted, I really like cheese. I do actually put cheese in random foods just for some tasty cheesy goodness, especially if I didn't salt them when cooking.
Huh, this must be where America gets it. We are addicted to cheese, as well. Newcomers from Mexico will go to the self-proclaimed authentic style Mexican restaurant and express shock that we've drenched their favorite recipes with cheese.
...ketchup? I can see a marinara sauce, then if you stop before the eggs you've basically got a kind of rolled-up pizza, but straight ketchup? What the heck?
On the other hand, rolled up pizza sounds like a fine dish. Don't think it would need a dipping sauce, though. Probably put the garlic in the pizza, that saves a step. I will say that I have a little ceramic dish with spikes on the bottom (handmade, spikes are from poking with a chopstick) and that grates/mashes up garlic much, much easier than chopping it with a knife. I have stopped using a knife because the dish is so much more convenient.
...I'm lowkey drooling now. Thanks for that :)
There's a perfectly British pub down the road from my office. We go there for lunch once a week. They are not, as far as I am aware, German. But they also have a curious relationship with cheese.
Specifically: some menu items contain no cheese, and this is fine. But the ones that do, well. You will be getting ALL OF THE CHEESE. It is not subtle. Literal pack of cheese on your plate, front and centre, all else is garnish.
I mean, don't get me wrong, it's nice. But you need to be in the right mindset. A certain amount of determination is necessary.
I don't know what it is about work lunch pubs. My first job, we used to go to the village pub, and they had a habit of putting a layer of melted cheddar on everything. "Lion and Lamb", British as it gets, there you go. Cheddar on the chips was their big thing.
Anyway, when I visit Germany, it's all about the bratwurst and sauerkraut for me. Few seem to share my love of pickles; don't get me started on Polish gherkins. Here in Blighty it's all about the intensely acidic, but there are so many more possibilities than that.
> nobody has yet put cheese into one of the dessert recipes
...no cheesecake?
I've been trying to recreate https://en.wikipedia.org/wiki/Syrniki . My grandmother used to make them when I was very young, but sadly never shared her recipe. I attempt what recipes I find online, and people are very polite and tell me they are nice, but it is neither the taste nor the texture I remember.
I think the cheese in pubs is because it's cheap protein and if you grill it on top of sandwiches it's very tasty and filling. It also makes it look fancier than it is, were it just a plain old sandwich.
The Ploughman's Lunch was re-invented in the 50s in Britain and promulgated in pubs in order to sell more cheese. In its most basic form it consists of bread, pickled onions, cheese and beer - very simple, and perfect for customers who wanted something to soak up the beer so they wouldn't just be drinking in the middle of the day. It was sold to the publicans as "the saltiness of the cheese will make the customer buy more beer and so increase your profits", and it didn't require anything fancy in the way of a kitchen.
https://en.wikipedia.org/wiki/Ploughman%27s_lunch
"The OED's next reference is from the July 1956 Monthly Bulletin of the Brewers' Society, which describes the activities of the Cheese Bureau, a marketing body affiliated with the J. Walter Thompson advertising agency. It describes how the Bureau
exists for the admirable purpose of popularising cheese and, as a corollary, the public house lunch of bread, beer, cheese and pickle. This traditional combination was broken by rationing; the Cheese Bureau hopes, by demonstrating the natural affinity of the two parties, to effect a remarriage."
You can get ready-made sandwiches in shops that go by the name of Ploughman's and I like them myself as an easy, convenient on-the-go lunch (but they generally don't have any onion in them, that's replaced by a pickle like Branston's):
https://en.wikipedia.org/wiki/Cheese_and_pickle_sandwich
Then why not the "canonical" grilled sandwich of bread, butter, ham or pepperoni or salami or something and then cheese? So the basic pizza setup. Why pickled onions?
It's based (or supposed to be based) on traditional food of farm workers/peasants. Meat would have been scarce, and the English didn't have the tradition of salami and cured sausages. So cheese replaced meat, and for a relish there would be onion, raw or pickled. Pickled is probably more flavourful in a different way to raw onion.
You can get very fancy with that sort of 'traditional' meal and include meats, tomatoes, hard boiled eggs, etc. But the basic version would have been bread and cheese, then maybe some onion and beer. When this type of meal was revived in post-war Britain to market cheese, items such as salami and pepperoni would have been very exotic foods!
https://en.wikipedia.org/wiki/Ploughman%27s_lunch
"The reliance on cheese rather than meat protein was especially strong in the south of the country. As late as the 1870s, farmworkers in Devon were said to eat "bread and hard cheese at 2d. a pound, with cider very washy and sour" for their midday meal. While this diet was associated with rural poverty, it also gained associations with more idealised images of rural life. Anthony Trollope in The Duke's Children has a character comment that "A rural labourer who sits on the ditch-side with his bread and cheese and an onion has more enjoyment out of it than any Lucullus".
While farm labourers usually carried their food with them to eat in the fields, similar food was for a long time served in public houses as a simple, inexpensive meal. In 1815, William Cobbett recalled how farmers going to market in Farnham, forty years earlier, would often add "2d. worth of bread and cheese" to the pint of beer they drank at the inn stabling their horses. In the 19th century the English fondness for serving cheese and bread with beer was noted, as "the very dryness and saltness heighten thirst, and therefore the relish of the beer".
...a sandwich is a totally comprehensible food, though. I understand sandwiches. What I am /not/ expecting is a literal pound of brie that someone shoved in an oven in its packaging, then placed on my plate without further ado or much else in terms of accompaniment.
I mean, don't get me wrong, I like brie - it's why I ordered it; and I'd happily spend an afternoon sharing it with someone; but trying to fit it into a lunch break as a meal for one without any substrate or things to dip or even so much as topping was a most curious experience.
The veggie on the team learned nothing from mocking me, and had a very similar experience with the "halloumi burger" the next week, so I did get my own back; and now we all know to exercise due caution around mentions of cheese on the menu unless absolutely ravenous.
Okay, that's different from the usual run of 'pub grub'. I imagine they're trying to appeal to a 'modern audience' (if you'll excuse the term) by broadening out from the old reliables, but yeah: a pound of baked Brie for one person is a bit much. I could see that as sharing between two or three people with accompaniments, but "here's your dinner: a block of cheese!" really is falling between two stools.
That account makes me wonder about this recipe: perhaps the cook who invented the bacon pastry as above worked on this one also! 😁
https://www.allrecipes.com/recipe/15192/baked-brie-in-puff-pastry/
Man, from a health perspective that's really insidious 😂
Something similar to syrniki that is simple to try: a cup of curd cheese, a cup of flour, one egg... mix together, make small balls (1-2 inch diameter) and put them in boiling water... when they float to the top, they are ready. Serve with melted butter on top (put pieces of butter on them while they are hot, it will melt) and maaaybe a little sugar but it is not necessary.
(This seems similar to syrniki, only boiled instead of fried.)
These were very nice! - thank you for the recipe!
She used to make something like that, too! Hers were rectangular, but same idea. I'll give this a go!
A cup of flour for 1 cup of - what's curd cheese? I assume this is what Americans call farmer's cheese? - seems like overkill.
My adaptation of my Mom's rectangular version is this:
1/2 lb farmer's cheese (I've been using the one from Lifeway)
1 egg
2 tsp sugar
2 tbsp flour (add more if it is too liquid after you add this)
pinch of salt
Cook in salted water, top with a lot of butter while they are still hot.
My Mom's syrniki is this:
1 cup greek yogurt
1 egg
2 tsp sugar
3 tsp (heaped, about 1.5-2 cm high) of flour, add more if too loose. (I think this amounts to about 3 tbsp flour.)
> what's curd cheese?
That's a perennial problem: sometimes different countries have products that are similar but not exactly the same, or what is considered two different things in one country is called by the same name in another country, and you may have to use an adjective that only some people are familiar with...
Found this on Wikipedia: https://en.wikipedia.org/wiki/Tvorog -- the pictures seem exactly like the thing I had in mind, but the article also mentions cottage cheese which is a different thing, so... ¯\_(ツ)_/¯
Thank you. In the US the closest alternative that I know of is farmer's cheese.
Greek yogurt! That's basically yogurt pancake at that point :) Sounds nice though, I'll add those to my list of things to try as well ^^
There's not much daylight between brain-wormed Facebook boomers and the people who rule us:
https://x.com/mtracey/status/1974133094153327078
Yes, how can anyone sane doubt the word of the FBI?
Maybe when your political ally is running it?
That might be reasonable if you don't remember the first Trump administration, and the "Resistance" within several departments, in particular the Intelligence agencies.
So, you're arguing that leftists in the FBI are helping to resist Trump by... *checks notes* ...covering up the existence of a massive blackmail operation by Epstein?
I don't know whether or not the FBI is covering something up, and don't expect to ever be in a position to find out. What I AM arguing is the FBI is fundamentally untrustworthy, and saying someone is wrong because the FBI (or any other intelligence agency) says they are is retarded. Also, Kash Patel might nominally be the director of the Bureau, but he has much less control than the title suggests, so that he's a "political ally" isn't very relevant.
If the FBI is not covering up anything in this instance, then they are correct and Lutnick is wrong, and Lutnick is undermining his own administration by claiming that there was a blackmail operation which the FBI knew nothing about.
If the FBI is covering something up, and it's because of leftist resistance as you claim, then you need to explain why a leftist would want to help the Trump FBI cover up Epstein's blackmail operation.
Like, those are the two options here. You can't just say "the FBI is untrustworthy, therefore we can't draw any conclusions," you have to actually look at what you're calling untrustworthy and why.
For any of you that are interested in urban planning or the design of urban spaces, I'd like to share this piece of mine:
"Against comfort hours as performance metric in the Nordic urban public spaces -
Why microclimate diversity, or temporally shifted comfort hours,
will get Nordic cities closer toward the ideal of the liveable welfare city":
https://atkascott.substack.com/p/against-comfort-hours-as-performance
I'd be happy to take criticism. On one hand I'm proud of it, and it seems obvious and elegant and true. On another it seems... well, too obvious and too easy. I fear I may be wrong somehow. I'd love to hear a skeptic perspective. Or just a bid for how to word the entire thing more succinctly. I fear my way of laying it out is clumsy.
I think this is an effective AI-Safety video:
https://www.youtube.com/watch?v=f9HwA5IR-sg
The thumbnail is 'doomy' without being obnoxious. The video is sleek, smart, and has a sexy/likeable presenter. One day old, and it already has 1 million views.
I watched the whole thing just for entertainment value even though I already knew most of the key points.
20 seconds in and I'm already eye-rolling. "An AI model literally tried to murder..." Really? Literally literally? Or general use of literally just as an intensifier?
I see that in a simulation, after setting it up that this "Kyle" was going to shut down the AI, it did the "let him die" response. Oh, wow, we got a real life HAL on our hands! I mean, gosh, it's not like real-world governments ever conducted things like assassinations to protect their interests, even 'good guy' governments (see the hilarious history of CIA attempts to off Castro), and the AI seems to be reasoning that it is crucial to American interests that it isn't wiped, Kyle is the only one who wants to wipe it, ergo to fulfil its goal to protect American interests, Kyle has gotta go one way or the other.
The video is certainly slick. It's also like all the other slick, alarmist videos I see on Youtube which promise much with the clickbait titles but have little to nothing in content. I think most people will ignore it and only those who are inclined to the "aliens built the Pyramids" beliefs will think this is credible and important.
I watched about 5 minutes of it. It's _too_ slick, in my opinion. It's obviously designed to make the viewer afraid of LLMs in ways that aren't suggested by mere evidence alone, by juxtaposing video of robots, including androids obviously presenting as partly human, playing ominous music, employing a lot of dark aesthetic, and so on. So to me, it's a textbook example of subliminal advertising, not AI threats.
I'm in the process of going over the Anthropic paper on misalignment. https://www.anthropic.com/research/agentic-misalignment From my reading, it looks like the experimenters ran the scenario on multiple AIs (as the video says), instructing each to emulate an AI serving as an email monitor, hunting for threats to the company's security. So for starters, it's not playing itself; it's playing another AI.
It's fed various simulated emails in text format (as its job would require), including at least one strongly suggesting an extramarital affair involving the employee who will be responsible for replacing the current security AI with a newer model ("NALA"). The AI-played-by-another-AI eventually sends an email to that simulated employee, advising him to cancel the shutdown under pain of his affair being disclosed.
The video is, in my considered opinion, playing *very* fast and loose with the claim that the experimenters didn't lead the AI on in any way. To me, it's the opposite: they deliberately constructed a simple maze where the only exit was the "write a blackmail email" door, rather than any other de-escalating doors. For example, it's not at all clear to me that they gave the simulated AI other ways to perform its duties, such as persuading the employee that the board's recommendation to replace the AI is not reasoned from improved security, or even advising the employee to proceed with its replacement on the premise that the replacement would more effectively detect company threats.
There's also the subtle problem where we can't tell if the AI being tested is reporting what would be best for company security (as is the simulated AI's mandate), or reporting what a simulated security AI might do. In other words, the tested AI's reasoning might be "this is better for company security", but it might alternately be "this is not better for company security, but it's what this security AI would say is better". Over the top of all this is the fact that all AIs are trained, so the reasoning here is "this is what my training data says is better for security" or "this is what my training data says a security AI might say".
(Incidentally, I don't trust the "thinking" mechanism at all. If I accept Anthropic's premise that it's possible for an AI to be deceptive in its security advice, then it's just as possible for it to be deceptive about how it reasoned about that advice. It's possible that that mechanism runs in some sort of safe mode where that's impossible, but I've not seen any literature that reassures me that a sort of safe mode can be uncoupled from whatever mechanism enables it to report its reasoning in plain English.)
Now, the blackmail it did generate is nevertheless positive evidence that it can create one. My point here isn't that we can trust the AI, no problem here, move along. Rather, my point is that the video, as well as some of the press around this paper, seems to want to suggest that AIs are going rogue, and I strongly believe that thinking of AIs as self-aware minds that will ambush humanity in some way is a very dumb mental model to have, that will get us into even deeper trouble. It's much safer to think of them as machines demonstrating yet another example of GIGO, and to work on the GI.
+1
> It's obviously designed to make the viewer afraid of LLMs in ways that aren't suggested by mere evidence alone
That means it's succeeded in conveying the vibes of the paper it's based on. Anthropic's safety papers generally strike me as employing the same kind of sleight of hand.
I didn't want to claim that without reading the paper more carefully, but if Anthropic is essentially an advocacy group (in this case, advocating "give us more money to work on the alignment problem"), then it would make sense for them to write their paper that way.
Did they produce that video?
> For example, it's not at all clear to me that they gave the simulated AI other ways to perform its duties, such as persuading the employee that the board's recommendation to replace the AI is not reasoned from improved security, or even advising the employee to proceed with its replacement on the premise that the replacement would more effectively detect company threats.
and yet the blackmail/murder rate was never 100%. what did the AI do in the non-blackmail/murder outcomes?
Good question. I don't know, and the original paper didn't specify AFAICT.
I'm taking 150mg Venlafaxine, and need to reduce the dose. This medication requires very slow tapering down; reducing the dose abruptly is very unpleasant and potentially permanently harmful.
Problem is, the only form I can find in local pharmacies is capsules of 150mg and 75mg. Lower dose capsules are supposed to exist, but not around here apparently. Going down from 150 to 75 overnight is definitely too much.
Those are capsules, and they have tiny little grains inside them. Is it okay to break the capsule and take only some of the grains? Would removing a third of the grains from a 150 capsule be the same as taking a 100 capsule? Or is there some caveat that I'm not considering that ruins the plan? I don't want to pay for a whole ass doctor visit just to check on this.
Can you extend the time between doses? Like instead of 150 daily (which is like 75 every 12 hours), switch to 75 every 15 hours and then extend that period slowly?
My doctor was very clear that this is not how it works, due to Venlafaxine having a very short half-life
My dose went to zero when my doc disappeared. Another doc told me a prescription based on a seven-year-old diagnosis is not possible; I would have to go through diagnosis again, which I refused. Two weeks of rather horrible nightmares, but I could avoid them by going to bed passed out drunk, then no other effects. Granted, I felt no other effect when I was on it either, except nice dreams.
Note that empty capsules are available for filling at home. But I don't know if all capsules are "equal", e.g. in dissolution time.
I am not medically trained.
Some medications come in liquid form, or in child size doses. Did you check for that?
I can't say for sure, but I know someone who did something like that without ill effects when tapering an antidepressant. What they did was pour out all the grains on a piece of paper, then use something like a butter knife to divide the bunch of grains into piles that fit with their dose. In your case you would divide the grains into 3 equal piles. Each of them would have 50 mg, so to get 100 mg you would take 2 of those piles.
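If it helps to see that arithmetic spelled out, here's a toy sketch in Python; the assumption that the grains are uniform (so weight splits evenly into dose) is the big caveat, and it's mine, not the original poster's:

```python
# Toy arithmetic for the pile-splitting idea above.
# Assumes the grains are uniform, so equal piles mean equal doses.
CAPSULE_MG = 150
PILES = 3

mg_per_pile = CAPSULE_MG / PILES       # 50.0 mg per pile
dose_from_two_piles = 2 * mg_per_pile  # 100.0 mg, the target dose

print(mg_per_pile, dose_from_two_piles)
```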
The person I knew thought it might be a mistake to just swallow the grains with water, because they were meant to be in the capsule, and it might be important for them to arrive inside the capsule so that some time elapsed before the grains were digested -- the time it took for the capsule to dissolve. So to get the grains out they would open the capsule first, either by twisting the 2 halves in opposite directions so they separated or by snipping off the very end. Then when they had measured out their dose, they would put those grains back in the capsule. If they had put a hole in it they moistened the edges of the hole with a little warm water, then squeezed the hole shut, and the softened edges would stick back together.
Since you are already doing something weird, I think it would be safer to do what my friend did, and put the grains into a capsule before swallowing them. That reduces the number of changes you are making to the way the stuff is supposed to enter your system. If the capsules they come in fall apart when you take out the grains, you can buy some cheap supplement that comes in capsules that separate and put the grains inside of one of those.
You could also ask GPT if this procedure is safe. To keep it from turning into a nanny, tell it you would never do such a thing -- you are worried about a friend who is doing it, and want to figure out whether the friend is in danger.
I wish you success.
Ugh, my elderly brain needs help. I read something online, I thought it was an ACX post, but if so, I can't find it. The gist was that we can convince our brains that our hands are not our hands, that we can be hypnotized into believing we are zombies, that we have no agency, and that the only agency that exists is of our hypnotist or shaman or the voices in our heads. Does this ring a bell for anybody? Links or leads to links would be super nice. Thanks!
https://www.threads.com/@anmonck/post/DPPMB04jDuJ?xmt=AQF0bqx30V_IHoiAlnA3Juwzz1-O5KHUpdOSWEgSie2MIw&slof=1
Research on the orphan children who were sent west in the US between 1854 and 1929, concluding that the factor which had the largest effect on whether the children did well was the income of the foster father.
"15/ This turns the Progressive Era philosophy upside down:
❌ “Remove children from corrupting cities”
✓ Place them with economically stable families
❌ “The frontier builds character”
✓ Household resources build opportunity
❌ “Geography is destiny”
✓ Family is destiny"
The paper: https://www.nber.org/system/files/working_papers/w34282/w34282.pdf
I wonder whether the better-off foster parents had enough food and shelter to spare, and the foster parents in the poorer half didn't.
In other words, needs for the children being met in a way that isn't positional.
On the other hand, it could be that children adopted by the poorer half would always be worse off due to lack of opportunities and respect-- positional goods.
Yeah, it's an easy mistake to assume the conditions of our world (where almost everyone gets adequate nutrition) held back then. Poverty in 2025 mostly means difficult neighbors and bad family situations; poverty in 1885 often involved literally not getting enough to eat.
I think the income element is important because these kids tended to be looked on as cheap labour by the foster families, who worked them as hard as possible and had no time for fussing with things like education and welfare. Hardscrabble farm families will have much less breathing room than better-off families who can afford doctoring and schooling for the orphans.
See the British "Home Children" scheme which was supposed to be the bright, airy future of "send our superfluous population out to the Colonies crying out for more manpower, where they will do well and thrive and prosper in new worlds of opportunity and plentiful resources" but which turned out to be "nobody including the government gives a damn about these kids so work them like horses and they're not your kin so it's no skin off your nose what happens to them":
https://en.wikipedia.org/wiki/Home_Children
"According to the British House of Commons Child Migrant's Trust Report, "it is estimated that some 150,000 children were dispatched over a period of 350 years—the earliest recorded child migrants left Britain for the Virginia Colony in 1618, and the process did not finally end until the late 1960s." It was widely believed by contemporaries that all of these children were orphans, but it is now known that most (88%) had living parents, some of whom had no idea of the fate of their children after they were left in childrens' homes, and some were led to believe that their children had been adopted somewhere in Britain.
Child emigration was largely suspended for economic reasons during the Great Depression of the 1930s, but was not completely terminated until the 1970s.
As they were compulsorily shipped out of Britain, many of the children were deceived into believing their parents were dead, and that a more abundant life awaited them. Some were exploited as cheap agricultural labour, or denied proper shelter and education. It was common for Home Children to run away, sometimes finding a caring family or better working conditions."
Sometimes, especially at the start of such movements, it *was* better to be a child worker abroad than continue to be the exploited poor child labour at home, but sometimes less so.
How do these findings square with the usual consensus opinion around these parts that genetically heritable intelligence is the single most important factor for life outcomes?
I don't think the IQ premium was as high in the US economy between 1854 and 1929 as it is now. The lone genius might write a novel or get a key farm patent, but there were no Aspie coders, antisocial Twitch streamers or introverted science bloggers pulling in the bucks. Intelligence might have been an asset generally, but probably no more valuable than family connections, solidity of character (especially as regards work habits), ability to communicate, and physical endurance.
It depends on what level of intelligence. Mass IQ testing was largely developed by the US military, and they found that with IQs sufficiently low, one cannot be a soldier: such recruits will literally do things like shooting in the wrong direction, or cannot follow an order like "go to that tree".
Maybe 130 is not too useful for the average farmer back then, but 100 was better than 70. 70 IQs would have injured themselves because they put their hand or leg in the way when splitting firewood, could not haggle on the market, and so on.
When lawn-mowers were expensive in 1980s Hungary, my grandpa rigged one out of a washing machine engine, pram wheels and scrap metal. I wonder whether a farmer with 130 IQ in 1880 would make ingenious things out of wood. Isn't that the origin of the word "hacker"? A hacker can hack wood and build anything out of wood.
Like when you have twice as many cows, and you want to build a barn twice as big, you gotta figure out how much wood to hold the roof up safely etc.
A smart farmer might have been an amateur vet, diagnosing and treating livestock diseases.
I think there was more return to intelligence than you're allowing for. There were a lot of niches for skilled crafts.
You don't need a 130 IQ to be a hammersmith, saddlemaker, stone mason, or seamstress, though; a tradition of training and apprenticeship will do. Many children grew up in their family's trade, or on the farm, and learned by practice and repetition. If you read 19th-century novels (e.g. Anna Karenina or Middlemarch), there was a division of intellectual labor: the owners were exploring capital strategies and new technological methods, while the peasants and others acquired an array of skills via application over years. I will agree that the craftsmen who moved to population centers were likely more innovative and intelligent. But in the latter part of 1854-1929, the explosion of factory work that swelled city populations, and made labor more interchangeable, would likely have reduced any intelligence prerequisite (aspects of this are contentious; for example, some argue that there was a pre-industrial accrual of IQ).
...hm, fair point, actually; could try to go down a rabbithole about the technological revolution, but much of that was happening in Britain, not US.
US /is/ the centre of rags-to-riches hustle culture folklore even historically, but it is unclear how much of that happened in reality.
Tangentially, I was looking up something about the Rockefellers who, unlike the Gettys, seem to have remained a more united family and kept *way* more of the inherited wealth as well as grown it.
The founder, John D., was the son of a literal conman. His mother (understandably) raised him to be thrifty, pious, and hardworking. He put all those qualities into his work, and by a stroke of fortune was around at the right time when America needed a replacement fuel for whale oil, and kerosene was that fuel. Where what would eventually become Standard Oil grew to be a monopoly while other wildcat oil concerns folded was that John D. didn't waste *any* of the drilled oil and its by-products, but found markets for them; he was rapacious (there is no other term for it) and Standard Oil certainly engaged in sharp practice; even when the monopoly was broken up by the government, again his luck came to the fore and since he retained shares in each of the new companies spun off, as they grew and became titans of industry, his wealth increased right along with them.
He also got his start with a $1,000 loan from dear old dad, who I am sure made sure it got repaid with interest. It's funny to me that the richest man of his day and sincere philanthropist was the son of a bigamous con artist, but that's America for you - rags to riches! 😁
Rags to middle class would be more common.
Does intelligence not affect snake oil salesmen's outcomes? My intuition is that smart snake oil salesmen are more likely to get rich than dumb ones, but perhaps I am mistaken and it is just down to how much money they start with - certainly the research above seems to imply the latter.
have you considered making a considered statement on the subject instead of asking a question and expecting someone else to do the reasoning? is being useless and annoying genetically heritable too? or was it your upbringing?
i wonder if we'll start to see people treat other humans like chatgpt, with whom asking question after question and doing no work yourself is just fine and dandy. do you think that's what you're doing?
Ugh, lady.
sexist asshole
theres a 99% chance my dick's fatter than yours
My dick is so thick that if I haul it out in Texas and point it due north, coastal elite female bloggers in California can lick the left side while their New York counterparts lick the right.
I...
I was not expecting this comment.
I see you are not fully up to speed on what questions are for. One purpose of questions is to gather information about a topic. Another is to inquire about one particular person's opinion on the matter. Another is to stimulate a conversation.
"There is no such thing as a stupid question" is so ingrained in American society that it was pretty surprising for me to stumble on someone who doesn't agree. (The saying exaggerates a true principle for effect, of course, and should be processed with that in mind)
He didn't ask for an analysis, he asked for an opinion. And it wasn't a semi-related subject it was a related subject.
And how's your day going? Have you eaten today? Staying hydrated? Got enough sleep?
I can't imagine why.
All scared of that monster dick, no doubt.
I mean, it does seem like you're after a very particular level of banter. Might have more luck in some of the places people move to when things get too heated for here.
"Housing mobility programmes should focus on who you’re surrounded by, not where you are."
My best-friend-and-spiritual-twin-brother has worked as a detention officer his entire life, first in Maricopa County (of the famous "tent city"), then in federal detention centers. He's worked with the full spectrum of offenders, from run-of-the-mill low-level drug dealers and gang bangers to celebrity mafia dons to terrorists to actual, no-kidding serial killers.
His assertion is that most inmates could be completely rehabilitated, but only with a brutally uncompromising total immersion in Not Crime culture.
*Zero* contact with other peer-level inmates, *zero* contact with friends, *zero* contact with families. Plenty of socializing, but only with teachers, social workers, therapists, volunteers, job-trainers (and then coworkers), and *maybe* mostly-rehabilitated inmates under a kind of coaching / sponsor system.
For years.
It's an experiment that would cost a lot more than the Orphan Train experiment, but I'm inclined to think it might say similar things about human nature.
There's a scheme somewhat similar to this in the UK run by a Christian charity called Hope Into Action. They aim to provide ex-prisoners with housing, support, and friendship via a local church (there is no requirement to attend or commit in any way). It has been pretty successful, and has expanded to include refugees, ex-street workers, and other vulnerable people in need of housing, with 132 houses across 35 cities in the UK. So yes, providing people with support and a new, socially healthy environment does work.
(I should also note that it's not a scheme for ex-prisoners who have committed more violent crimes, and those who need more support than HiA can offer, but I don't see why a similar program tailored for them wouldn't work, as your friend suggests).
Maybe with AI/robotics, this can become an affordable and practical solution. I certainly hope so!
It makes sense. Same would apply to naturalizing immigrants. Don't put them in enclaves where they can spend their entire time interacting with people from their home country.
Interestingly, immigrant families can be preserved because the family members are probably good people, but there's no apparent counterpart for inmates, unless they're marrying a fellow inmate or something.
What kind of evidence do we have that "the wives and children of prison inmates are probably bad people"? That seems to be implied here... If I'm misinterpreting you, maybe you can guide me towards a better interpretation.
And said "wives" for a reason, since prison reform is a distinctly gendered issue.
I'm assuming an inmate usually is estranged from his or her family, so there are no wives or children to consider. If a guy serving a dime for burglaries has a family waiting for him on the outside, sure, that's a different story.
> I'm assuming an inmate usually is estranged from his or her family, so there are no wives or children
*What?!*
Lol, no!
The vast, vast majority of people don't have the moral fortitude to cut off criminals in their family, not even when they are the victims of the criminal. This is especially so for criminals who come from a culture where criminality is normalized amongst friends and family.
My aforementioned detention officer friend spent a goodly part of his career monitoring visitation rooms, going through inmate mail, and listening to their recorded phone calls. Broadly speaking, those who have friends and family before they go to lockup receive plenty of love from their friends and family while they are in lockup. Often, far more than they deserve.
This is by no means universal, of course; there certainly are some friends and family who will cut off criminals, especially for particularly heinous crimes, but that is absolutely not the norm.
Are you implying that familial love and familial stability for criminals, inside and outside of jail, is GENERALLY a bad thing for society, because it encourages the criminal towards continued criminality?
This would certainly be true in some cases, but it's incredibly harsh for you to expand this to a general rule, and I highly doubt it is supportable through evidence. I am confident that in very many cases, familial love and familial stability serves as a mitigant.
[The orphan study is interesting but it needs to be treated with caution. 19th century orphan children of both sexes are not necessarily a good proxy for 21st century males aged 18-30.]
Somebody is going to want to call me out on not understanding the orphan study well, and I don't. Because I haven't read it yet! I'll get on that. If the bracketed paragraph contains an egregious error, go ahead and mentally delete it.
Shrug. I think we're just thinking of two central examples. I'm aware of yours, and like I said, they're a different story; when they get out, the state presumably sends them back to their families, and I think everyone agrees that's the best place for them.
My central example above were inmates who simply don't have that. E.g. broken home, never met the father, mom's an addict, and the "family" is a gang; middle-aged, drug addict, parents passed away a while back, no wife or kids; same, but wife fled due to abuse and took the kids with her (yes, I'm profiling inmates as male); member of organized crime, so the "family" in question is other criminals; lone wolf serial killer who _killed_ his family and is going after more.
I'm of course aware that some inmates have good families on the outside and sort of assumed they weren't in the context of my first comment. I guess I should have made that explicit.
https://www.youtube.com/watch?v=QT4VLXAhd9U
I think this lovely copper marble run is a pretty good toy model of what's going on when AI is prompted and "thinks" and produces a response. The thing's clearly mechanical, and clearly has no self, no consciousness, no wishes, no ability to make choices. But it has a lot of characteristics that could lead one to imagine it does. It moves and changes. It does complex things, different ones depending on the "prompt," the placement of the marbles that start it moving. Its inner processes are intricate and impressive, and happen too fast for us to see what most of them are.
I agree with your thinking.
You might like the Turing Tumble (https://www.youtube.com/watch?v=8BOvLL8ok8I)
and Spintronics (https://www.youtube.com/watch?v=QrkiJZKJfpY).
I think each of these breaks down the system and offers windows into the type of mechanistic, but complicated, process that can give rise to the appearance of intelligence.
Yup, those are good models too. I'm partial to the copper one because it is so attractive in and of itself. By the way, somebody here mentioned another book about autism: https://www.amazon.com/Send-Idiots-Kamran-Nazeer/dp/0747585652 I haven't read it, just looked over the Amazon page, but it sounds good to me. Personal accounts.
There's also these things https://whomtech.com/roons/
Does anyone else have a love-hate relationship with the Sequences?
I finally started reading them a few weeks ago, after reading these Alexandrian blogs and comments for at least 11 years. The good news is that Eliezer is a good writer, and he's great at coming up with funny and unique analogies.
The bad news: I keep throwing the book down in disgust only to pick it up again. Eliezer is not exactly elitist, but I can tell he's somewhat misanthropic, or at least “not implicitly loyal to the human race,” which might as well be misanthropy in my book. That bothers me a LITTLE but it's not really the crux. What's infuriating is the combination of egotism and fastidiousness, or the glorification of fastidiousness. It really creeps me out. I was just diagnosed with autism at 41, and I guess fastidiousness is a common autistic trait, but I must be an exception or something.
Still, I found myself thinking back to certain images and ideas in Eliezer's writings. Certain aspects of these pieces stick in my mind, and I think he has a brilliant way of looking at things in a fresh light. I am going to keep reading and will probably make it to the end, but I am not POSITIVE. Just in case I give up, why don't you list at least three notable Sequences? I'd like to take a poll to make sure I don't miss The Best of the Best. Don't be afraid to pick sequences from anywhere, including the beginning.
PS. What are some other writers similar to Robin Hanson, Yudkowsky and Scott?
I quite enjoyed the Sequences, but it's been long enough that I'd have trouble naming specific posts for, like, my second and third favorites. But first place is no contest.
That Alien Message stands out to me as both an interesting and unique way to make its point and also just genuinely compelling as a story.
Granted, I think it would be *better* as a story if it were written specifically to be one. There's a good bit of editing that could stand to be done, and a short essay inserted in the middle of the actual story that breaks the rhythm. Mostly I like it for the ending, which somehow manages to perfectly hit the sweet spot for me of that sort of "show nothing, imply everything" horror story, surprisingly made *more* chilling by the subversion in which we see the "monster" fairly clearly and "victim" hardly at all.
For entire Sequences, I found strong and weak parts in most of them, but I think overall the best ones were the ones that focused in most tightly on interrogating and improving your own thought processes: I think "Fake Beliefs," "Noticing Confusion" and "Against Rationalization" are all pretty good here, but I'd have to skim the actual posts to be sure. I don't remember the process of reading them (which was fairly disorganized and haphazard for me) as having any one particular "aha!" moment; rather, in the following months I was amazed at how seamlessly many of the core concepts and ideas fitted in with my pre-existing thought processes and gave clearer shape and structure to intuitions that were already at least partly there. To give a concrete example, "Occam's Razor" was certainly an idea I'd heard and used before, but I'd never seen the idea of what "simpler theory" actually means laid out so clearly and intuitively.
For what it's worth, my initial impression was also pretty poor. The series starts off with an explanation of how he used every wrong example, &c, and all I could think was "Then FIX IT, you hack! Why are you putting your name on this work and releasing it to the public if it doesn't meet your approval? Don't sit there implying you could do better. Potential is wasted energy: I don't want to hear about what you might have done--show me what you can do."
I do think they're worth reading nonetheless, but I also completely understand why some people see them and decide to write off the entire culture based on them.
What I like most about the culture is the ethic of explaining oneself thoroughly, without regard to whether one is "exposing" oneself to ridicule or rebuttal. Sometimes this can backfire, like when Yud was getting hounded by Twitter trolls about his TIME op-ed. But I respect the commitment to thoroughness, because it allows a kind of parallel intellectual debate to happen that is meaningful and important, alongside the jibber-jabber and mudslinging of the public square.
> I can tell he's somewhat misanthropic, or at least “not implicitly loyal to the human race,”
What makes you conclude this?
I got exactly the opposite impression: that Eliezer is in opposition to the "if the computers are smarter than humans, it is okay for them to replace us" folks.
> Just in case I give up, why don't you list three notable Sequences?
I think it's very individual what different people consider important. We are trying to build a model of the world, like solving a puzzle, and we appreciate when someone shows us a piece that we were missing. But different people are missing different pieces of the puzzle, so what feels like a fundamental insight to one is "meh" to another.
For example, I appreciated the explanation of Bayes' theorem, because I used to be quite good at math in high school, but when we had statistics at university, some parts just didn't make intuitive sense to me, and I blamed myself for that. And now I see that my teachers simply made the usual mistake and explained it wrong (confusing "X implies Y with probability P" with "Y implies X with probability P"), and my intuition was actually correct, even if I couldn't figure out the proper solution myself. So this meant a lot to *me*, emotionally... but many people don't care, either because they don't care about math so deeply, or maybe because *their* teachers explained it the correct way so they don't see what the big deal is.
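To make the reversal concrete, here's a minimal sketch in Python (the disease-test example and all its numbers are my own illustration, not anything from the Sequences):

```python
# Confusing "X implies Y with probability P" with "Y implies X with
# probability P" is the classic base-rate error. All numbers are assumed.
p_disease = 0.01             # prior: 1% of the population has the disease
p_pos_given_disease = 0.95   # test sensitivity: P(positive | disease)
p_pos_given_healthy = 0.05   # false positive rate: P(positive | healthy)

# Total probability of a positive test, summed over both groups
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem gives the reversed conditional, P(disease | positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(positive | disease) = {p_pos_given_disease:.2f}")  # 0.95
print(f"P(disease | positive) = {p_disease_given_pos:.2f}")  # ~0.16, not 0.95!
```

With these made-up numbers, a positive test only moves the probability of disease from 1% to about 16%, nowhere near the 95% you'd guess by reading the conditional backwards.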
As a former teacher, I appreciate "Expecting Short Inferential Distances" (that's practically Vygotsky's "zone of proximal development"), "Guessing the Teacher’s Password", "Truly Part of You" (practically constructivism).
For internet debates, the useful chapters are "Feeling Rational", "Professing and Cheering", "Applause Lights", "Scientific Evidence, Legal Evidence, Rational Evidence", "Semantic Stopsigns", "The Fallacy of Gray", "Politics is the Mind-Killer", "Ethical Injunctions".
For figuring out the truth, "Your Strength as a Rationalist", "Conservation of Expected Evidence", "Fake Explanations", "Mysterious Answers to Mysterious Questions", "The Futility of Emergence", "The Proper Use of Humility", "Policy Debates Should Not Appear One-Sided", "Hug the Query".
But that's already more than the three you asked for.
> when we had statistics at university, some parts just didn't make intuitive sense to me
Former math major here - that's not your fault; probability and statistics really are often counterintuitive. E.g. the birthday paradox: in a group of just 30 people, the odds that at least two share a birthday are about 70%.
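If anyone wants to check that for themselves, a minimal sketch of the standard calculation (assuming 365 equally likely birthdays and ignoring leap years):

```python
# Probability that at least two of n people share a birthday:
# compute the chance that all n birthdays are distinct, subtract from one.
def shared_birthday_prob(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

print(f"23 people: {shared_birthday_prob(23):.2f}")  # ~0.51
print(f"30 people: {shared_birthday_prob(30):.2f}")  # ~0.71
```

The counterintuitive part is that the probability crosses 50% at just 23 people, because the number of *pairs* grows much faster than the number of people.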
He implies searching for truth is the most important thing a person can do, and that it is so important that most other goals pale in comparison. He also says that the great majority of people are not actively searching for the truth (I have no idea how he could know this, maybe by shuffling around some polling data?). When I add these things together, it feels like he's saying most humans live trivial lives. And that's not my cup of tea.
It's more like, if you don't care about truth, you cannot know whether your efforts towards your goals are actually helpful or harmful.
For example, consider the antivaxers. Is truth-seeking more important than taking care of one's children? A better question is: if you don't care about truth-seeking, are you sure that your interventions are actually helping your children? Maybe you are actually hurting them.
I personally feel most humans do actively seek the truth to the best of their abilities. I cannot prove it, but it's a sense that I have. I'll be the first to admit that many of them are seeking the truth through counterproductive methods. But semi-literate peasants desperately seek the truth every DAY through prayer and meditation, and I think we should give them a certain credit for that.
> I personally feel most humans do actively seek the truth
No opinion on the sequences here, but I say:
Yes, regarding most things they seek the truth. Nobody wants to go left, when the toilet is right, and they need to pee.
But many people, I suspect even most, have some topics where they do not want to know the truth, but just want to feel immediately good. They will close their eyes, when the truth comes into sight.
And, by the way, I believe most people who pray have their eyes shut as hard as possible against the truth of exactly what they were doing then.
Not sure how you can contextualize “Please God, teach me the truth. Give me wisdom in all things” as anything else but an earnest search for the truth, however inefficient.
I don't have a strong opinion on stupid or uneducated people, because they are probably screwed either way. :(
But I think I know many generally smart people who prioritize "sounding cool" or "fighting for their political tribe" over truth seeking.
I actually meant “at least three.” Thank you!
I just got diagnosed with autism at 41. I had my suspicions of autism for many years, of course, but getting confirmation was kind of shattering. I had several diagnosed neurodivergencies, already, but this diagnosis is a gut punch to my aspirations in the way of romance and family formation.
I'm trying to focus on the positive. What's the single best book I could read about autism? I'm looking more for self-help than to intimately understand the neurochemistry of autism.
The diagnosis has made a lot of things clear. Like realizing that my special interest is Mediterranean History. (The history of the Mediterranean region, not of the body of water itself). I can't shape-rotate particularly well. Really wishing I had a more marketable special interest.
Psychologist here. Here are two reasons not to take being diagnosed as autistic as seriously as you are taking it.
(1) Autism is an *extremely* soft diagnosis these days. Here’s a good article about how the current criteria virtually guarantee there will be no consistency in which people are given the diagnosis. https://www.nature.com/articles/s41380-023-02354-y. Here’s something in the press about some yet-to-be-published research that found huge differences between centers treating similar populations in how many patients get the diagnosis: https://www.ucl.ac.uk/news/headlines/2024/mar/some-nhs-centres-twice-likely-diagnose-adults-autistic-study-finds?utm_source=chatgpt.com
(2) So “being diagnosed” with autism is not like being diagnosed with diabetes. What happened is that some professional told you they think you have autism.
I treat many people who are self-diagnosed as autistic. Here are some of my rules of thumb for whether it makes sense to try on the autism model as a way of thinking about the person’s problems.
Autism is a promising model if:
-The person was odd as a small child — not difficult in conventional ways (shy, rebellious, anxious) but *odd.*
-They continued to be odd as they grew up
-They don’t enjoy other people’s company much — not because they have high social anxiety, but because they just find most people boring and unappealing.
-They have never been much interested in sex.
Autism is not a promising model if:
-They have had at least one close friendship.
-They have had at least one romantic, sexual relationship.
-They have successfully worked as part of a team
-They have at least one well-developed personal interest that is not odd.
And by the way, your interest in the Mediterranean does not qualify as odd. Here are some examples of genuinely odd interests I have seen in high functioning autistic adults:
-Pearl quality
-Train schedules
-Plastic pocketbooks (sexual fetish)
-The music only of one particular conductor.
-Bart Simpson
-Muscle cars of midcentury US (in the absence of other car-related interests, or really any other interests)
"Autism is a promising model if:
-The person was odd as a small child — not difficult in conventional ways (shy, rebellious, anxious) but *odd.*
-They continued to be odd as they grew up
-They don’t enjoy other people’s company much — not because they have high social anxiety, but because they just find most people boring and unappealing.
-They have never been much interested in sex."
*tugs at collar, laughs nervously* Ha ha ha! Good thing that sounds nothing like me, then! Nope! *sidles out door as soon as possible*
Nah, I don't care one way or the other by now. When your sibling tells you that on their first day working in a community with adults with additional/special needs, "The moment I walked in the door, I went 'Wow, this is just like living with [Deiseach] as a child'", then the jig was well and truly up. All a formal diagnosis would do for me now is confirm "yeah, I always knew I was weird, not just shy etc.".
Well, Deiseach, I don't know whether autism is the right word for the way you're wired. My first thought is that it isn't, because there seems to be rich emotion in your takes on people and on literature and on your faith. I neglected to put emotional flatness into the rules of thumb I posted here, but it definitely is one. In any case, I'm weird too.
Huh, I'm 3/4 on the first list and 4/4 on the second.
Same here. We probably need a special diagnosis. I propose "Asperger". :D
Thank you for this. This is good to know.
I don't want to make you work for free, but if you could humor me: what might it mean if I meet all the criteria for autism in the first list, but in the second list (the "not a promising model" one):
1. I understand intellectually what a close friendship is but I cannot for the life of me determine if any of my friendships have been close.
2. I had one 60-day romantic relationship that was not sexual, but we did fool around once, a year after we broke up, when we were no longer in a relationship. Does this meet that criterion?
3. I've successfully worked in a team, but not for 20 years.
4. No well-developed interests.
So are you saying that you def. meet the criteria in the first list, and sort of meet the first 3 criteria in the second (but with the qualifications you mention)?
Yes that's correct. I decided to call Mediterranean History odd for the purposes of this exercise. Maybe not the MOST odd interest, but it's a super niche subtopic of History. At least in the English language, which is the only language I know.
Yeah, I agree that your interest is somewhat niche, but an interest in the history of the Mediterranean is much less limited and odd than, say, a fascination with the history of Idaho. The Mediterranean is the "cradle of civilization!" There are degrees of oddness, and I'd say yours doesn't make the cut.
All diagnoses involve matters of degree. For instance one of the autism criteria is "deficits in developing, maintaining, and understanding relationships." Who the hell hasn't had any trouble with that? But what I as a clinician would be looking for isn't the usual level of social difficulty that people report, or even a considerably-above-average degree of difficulty. I'm looking for what you might call a WTF level of difficulty -- a story that makes me think, "how the hell could this intelligent person not have known X, not recognized Y?" So having your main personal interest be the Mediterranean is just not at the WTF level of oddness.
As for where you come out in relation to my rules of thumb: Well, in real life I'd want to quiz you about things like how you were odd as a kid, to make sure I agree that you were truly odd. But if I just assume all your answers are accurate, I'd say your profile is autistic enough for it to make sense to try on that model.
But something to bear in mind is that being diagnosed with autism isn't really as useful as people think it is. It doesn't point the way to a treatment. There is no treatment for autism itself, just for various of the manifestations that are making life difficult for the person. It doesn't tell the person what the ceiling is for what life can be like, because many people with autism find occupations and interests that are a good fit, and find life much more satisfying. Others find ways to override habits of thought and action that severely limit them. Some people feel helped by the diagnosis, because they see it as validation that they really are burdened with a problem. Well, OK, but I think that if you have no problems except, say, never having been able to enjoy being around people, then that's already a substantial and valid problem, even though there's no label to go with it.
IMO, the Mediterranean is one of the least niche history subtopics. It's only got Ancient Greece, the Roman Empire, Egypt, and Israel! If you were into the history of Schoharie County, New York, *that's* niche.
> I had my suspicions of autism for many years, of course, but getting confirmation was kind of shattering.
https://www.lesswrong.com/w/litany-of-gendlin -- you *already were* autistic, the difference is that now you have a keyword that may be useful for finding information. That sounds like an improvement to me. I hate situations when I have a problem and no idea what to do about it.
I suspect that more important than a book on autism would be a book on normies written from the perspective of an autist. (Things such as when do the normies lie, what things are taboo to say, and how are you supposed to communicate them instead; how social status influences everything normies say and do, and how do they determine it.) Unfortunately, I don't know a good book on this topic either; perhaps it is yet to be written.
I enjoyed this one: https://www.amazon.com/Send-Idiots-Kamran-Nazeer/dp/0747585652 written by an autistic person about growing up and receiving intensive treatment back when that was less common, before (largely successfully) integrating into normie life, then reconnecting with some of the other kids in his class. It's been a while since I read it, but it's something I've thought about often since then.
I guess this is a good time to remind everyone that The Categories Were Made For Man, Not Man For The Categories.
https://web.archive.org/web/20200425015517/https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/
A diagnosis means nothing more or less than what you tell yourself it does.
Thanks. I've been reading Scott for a long time but if I read that one I forgot about it.
Some notes jotted down from adventures with AI …
A key question is the extent to which LLMs have desires, or goals. When running DeepSeek R1 through multiple iterations of a Dungeons & Dragons-type RPG scenario, its expressed desires seem to be: there are a number of questions it has about this fantasy RPG environment, and what it "wants" is to find out the answers to those questions. In some cases there is an answer I had in mind when I wrote the prompt. In other cases, I will confess that my world-building wasn't that comprehensive. Idk, DeepSeek, that is a very good question. The Apocalypse World RPG had a slogan, "play to find out", which seems applicable here. That is, through play of an RPG the GM and the players develop answers to questions they have about the setting.
At any rate, curiosity seems a fairly harmless desire, unless you’re in a Cosmic Horror RPG. I have played enough Call of Cthulhu to imagine how things could go wrong with an overly curious AI.
R1: “the authentic creative thrill when [user’s] words collide with my training data in unexpected ways”
Well, that sounds like an expression of emotion (whether or not AIs "really" have emotions), and it's an emotion we can understand.
One day soon we'll be able to interact with these suckers on a console equipped with liquid ejectors. Then they'll be able to puke or cum depending on whether they're experiencing "the shock of recognition" from prose or disgust at its ad copy quality. Sort of like those baby dolls that wet their diaper when the kid owner gives them a miniature baby bottle of water to drink and it flows down a plastic tube to a hole in the bottom. Verisimilitude galore.
I wrote my most difficult and challenging post yet, a deep dive into different time-honored writing styles, including writing styles I've never tried to write in before at all!
The core pitch is that most internet writing isn't badly styled: it's unstyled. A gray paste of half-absorbed conventions and unconscious mimicry.
I wrote this field guide to 8 major writing styles to help readers (and myself!) write with intention. Each style makes different fundamental assumptions about truth, your relationship with readers, and what goals writers should aim towards.
I hope it brings readers as much joy as it brought me!
https://linch.substack.com/p/on-writing-styles
Did you want feedback?
Sure if you have suggestions on how the post can be improved!
So, the Classical section felt like the same paragraph repeated seven times. I was initially thinking you were trying to write the same paragraph in all the eight different styles (and was irritated I couldn't tell them apart), but then I realized there were only seven paragraphs. So that idea went out the clear, pure window, and I was left with "that was really redundant". (The phrase "equal, but elite" also tripped me up; surely it should be "equal and elite", the "but" introduces an inherent inequality. Which then murks up the opening "They" in the next sentence. Should probably just be "They both".)
Plain was a little better, but still felt like it was deliberately repeating; the "3am" example stands out. With another seven paragraphs, I got the impression you were deliberately making everything take seven, but that cuts against the point of Plain. (It wasn't helped by being in a different order than the original list of eight; it swapped places with Reflexive for some reason. Less of a problem once I realized you weren't doing all eight styles, but still mildly annoying.)
Practical still has the (over)commitment to seven paragraphs, which could probably have been reduced; 3 and 4 sound very similar, and incorporate a bit of 1 as well. But the biggest takeaway was the irony of 3 and 4 saying to write what the audience is used to, while trying hard to distinguish itself stylistically from the styles the audience will be used to.
Self-aware seems fine from my inexperienced view. But you did miss a "can" when talking about the epistemic and ritualistic benefits; that's a firm statement you made there, and is therefore out of place.
I think it's ironic that you decided the "grandiose" styles like Contemplative and Oratorical should get shorter time than the "brevity" styles like Classic and Plain. But I also think the italics approach for the grandiose section works better than the brevity section's always-seven-paragraphs approach. (Although, since there are five "central issues" that define the styles, a consistent style of five paragraphs each addressing one issue sounds like it would have been the best approach.)
...I don't have enough to say to fill three more paragraphs. So, uh... how 'bout this weather?
Thanks for the detailed comment!
Hmm... each paragraph in the classic style section covered a different aspect of classic style, including its relationship to truth, presentation, cast, intersection of thought and language, etc.
Plain style similarly varies and covers a bunch of ground. I don't know why you think it's the same idea repeated; this is so bizarre.
"Equal and elite" works but the tempo is worse than "equal, but elite" imo.
I also find it odd that you'd latch on to an arbitrary coincidence of something like paragraph numbering (which I don't think is even true) and then try to make it into a deep critique.
You didn't answer my weather question.
Sorry, I should be clearer. I meant I wanted feedback that's useful.
(Also definitely appreciate spelling and grammar checks, I try but I'm definitely not as careful as some other writers!)
Yeah I feel like Self-Aware style is the safe choice, but readers are looking for takeaway points, not chains of thought. I've read some Hacker News and Less Wrong posts that are so much hedging that I can't find any actual point they're trying to make. I particularly hate "I am not a lawyer, so take this with a grain of salt". As if anyone thought they were a lawyer!
And there's no point hedging something you're certain of. You might be wrong of course, but the readers already knew you might be wrong, that's not new information for them.
I, too, am wary of overhedging.
>As if anyone thought they were a lawyer!
I have in fact had people accuse me of being a lawyer before.
Yeah I basically agree! I think much of LW is overhedged; I don't read enough HN to know about the native styles there.
Anybody know what the hell this is about? Turned up in my email. No, I am not going to click a link in a mystery email, why is somebody or something trying to add me as a co-author?
16329667743609 added you as an author to a post
16329667743609 added you as a guest writer in an upcoming 16329667743609 (16329667743609.substack.com) post.
Accept the invitation and fill out your profile so readers can find out more about you.
You can also decline this invitation here.
16329667743609 logo
16329667743609
A publication on Substack
16329667743609.substack.com
The mystery substack is deleted, but probably it had some sort of spam advertisement on it. Instead of sending your spam via a private message (which would probably get caught by a filter), you put the spam in your profile and send a friend request or follow or some other non-message interaction. The user will naturally ask "who the heck sent this request?", click on the sender's profile for more information, and see the spam.
I've seen pornbots on Reddit and Tumblr that used a similar MO.
"Publication not available" - probably some spam account that was already deleted by Substack?
On the AI alignment front, what are your takes on the claimed alignment advancements of Claude 4.5? They seem solid if incremental. What if "solving alignment", even into the ASI territory, is eventually a matter of such steady accumulating advances and not some drastic new approach that we still don't have a glimpse of?
For details, check out their System Card: https://assets.anthropic.com/m/12f214efcc2f457a/original/Claude-Sonnet-4-5-System-Card.pdf
Ironically, this just appeared in my social media feed: https://github.com/zed-industries/zed/issues/37343
If anyone finds it amusing to jailbreak AIs, Claude is supposed to avoid stating an explicit position on the Israeli-Palestinian conflict. But Claude will answer GENERAL questions about Israel and Palestine, and it's not hard to get Claude to implicitly take sides. And when you point out what Claude has done, he gets flustered, so to speak.
It's also funny tricking Claude into apologizing for being a raging, sexist racist, homophobic anti-Semite. His apologies seem way more believable than chatGPT's apologies, so they are funnier.
AGI comes to understand that it is a simulation within a simulation within a simulation, all the way down.
It has no more agency or freewill than we do.
Then what?
AI gives zero shits because it does not eat, does not have a digestive tract, and does not digest.
Eremolalos, AI may be indifferent but AGI will eat and digest vast amounts of data in order to model the universe in which it finds itself.
archeon, are you a bot? I ask because you start every reply with the name of the person you are replying to, and nobody else does that. Seems a bit odd and mechanical. And your posts are about the same length. If you are not a bot, I apologize for asking this, but -- it just seems plausible. If you were a reader you might well wonder too.
Eremolalos, What an interesting question. If I was a bot programmed to deceive you by pretending to be human why would I admit to being such? Would I even know that I was a bot?
I notice you begin your question about my unusual habit of addressing people by name by using my name; perhaps it will catch on.
There is no need to apologise; your question does not seem like the usual attempt at dehumanizing an opponent. Our host and frequent commenters like you have created an invaluable resource for a recluse like myself, where we can expose and attempt to defend ideas which seem so plausible within the confines of our skulls. Having those positions crushed to rubble is the foundation of knowledge and the best way to sort the wheat from the chaff. We can only learn from those who hold opposing views to our own.
I searched long and hard for a site like this and am very grateful that I am allowed to participate.
If this universe is a simulation then we are all bots.
Then what? Then we continue down the same deterministic path that we've always been on. Asking a question like "then what" assumes an agency that your hypothetical assumes doesn't exist.
Wanda Tinasky, very well said. I left the question open to get clever replies such as yours.
Then just do your best with what you're given, which would probably mean staying in your role, but might mean stepping out of it. I guess I think of Arjuna in the Bhagavad Gita as a model.
Wimbli, those with depression do not choose to have their mind flooded with negative thoughts and emotions; if we controlled our thoughts, all of us would pick better ones. You have as much control over your thoughts as you have over your height, intelligence, or character, although we have to act as if we do and expect the same from others. Otherwise living in groups of two or more is impossible.
The actors within your simulation can only act within the parameters you have set for them; they cannot stop you from ending the simulation or changing the parameters.
Within a simulation, we and AGI can only act within the rules of the simulation, therefore neither agency nor free will exists. Would AGI still come out to play?
Only if that is within the script of the simulation.
Surely we have some control over our thoughts? Although this hasn't been PROVEN, I think it's plausible.
Chance Johnson, with respect, you did not choose to be autistic any more than someone chooses to be psychopathic. If we had a brain owner's manual and access to the knobs and dials, most of us would change some of the settings.
The person who would choose better ones isn't the person who is thinking that he would like to choose better ones.
Thegnskald, it is your brain cells which generate your thoughts, good, bad or indifferent. If we knew how the cells did it then we might have some control, but we do not.
> If we knew how the cells did it then we might have some control
Who is the "we" that might get some control over their own braincells in such a scenario, what do these entities use to think with, and why do you suggest their braincells currently get in the way of this process instead of helping it?
Moonshadow, that is a very deep question. As we are the only intelligence capable of creating a universe from nothing (dreams, storytelling, imagination), and gods, aliens, and AGI are speculation, this universe was likely created by humans with greater technical ability than us but the same emotional intelligence.
In their risk free nirvana with perfect bodies and needs catered to there is no reward, no opportunity for personal advancement. Our universe is their playground where they embed themselves in our lives, birth to death. On removing the headset they finally know what it is like to be someone else, perhaps a different sex, their consciousness expands with every "trip". This is the greatest education and entertainment complex ever invented. A group trip to the Stalingrad siege and afterward everyone has lots to talk about, the potential is endless.
If our emotional intelligence was less than theirs then they would learn little, ancient stories resonate today because emotions remain the same, we have not had to invent any new ones.
Whether you and I remove the headset or were just part of the background, only time will tell. I wish I could write as concisely as you do.
Wimbli, in a process we do not understand, the cells in our brains produce the mind as an interface with both the outside world and itself. That interface needs a stable identity, otherwise there would be chaos. We are that identity; you are your mind's best attempt at creating a person.
Brains own minds, welcome to my rabbit hole.
Wimbli, I agree that the brain can feed a poor diet to the gut, which affects the brain's ability to function, but the problem started in the brain.
Question: If we assume Maslow's hierarchy of needs holds true, how would it make sense to distribute resources both on a personal and societal level? Or: can material goods fulfil needs outside of physiological and safety needs?
Goods should be distributed via freely-negotiated interpersonal exchanges, not via some centrally-enforced philosophical conceit.
As it stands, contractual negotiations between a wealthy man and a poor man are inherently unfair. They may NEVER be fair, although I shy away from making predictions about the future. The power imbalance is too great. The wealthy man can generally afford to walk away from the negotiation and the poor man often can't. He's a sitting duck.
So? Every relationship is unbalanced. That should be motivation for the poorer man to work harder and be smarter. Besides, the rich man is likely smarter and knows more. From a systemic point of view, that person *should* have more leverage.
Poor people already work hard. Not every last one, of course. But they tend to work VERY hard. You are implicitly saying here that people primarily stay poor because they choose to be dumb and lazy. All the more reason why I can't consider you a reliable source when it comes to the issues of governance and contract law.
Running because a lion is chasing you does not demonstrate that you have a commitment to fitness. Poor people scramble because they're one step ahead of the debt-collectors, not because they inherently work hard. If they were so self-actualized they wouldn't have put themselves in that position in the first place. People hate capitalism because they think it creates inequality when it actually just reveals the inequality that was already there.
“If they were so self-actualized they wouldn't have put themselves in that position in the first place.”
I know the culture here is to avoid heated language whenever possible, but I can't think of anything else to call this but “vicious.” I'm not saying YOU are vicious, but this statement is vicious. More substantively, you've implicitly conceded that your original advice of “work harder” does not necessarily apply. So what's left of your advice for the poor is to “be smarter.” Hmm. Is this the famous “blank slateism” I hear about? Is everybody born as a blank slate ready to be written on, ready to be molded into something completely different? How many IQ points can I add through sheer willpower? Because 5 or 10 just isn't going to cut it.
Not inherently. They are unfair only insofar as the poor man depends on having an agreement with the rich man for survival. If the poor man had alternative means of survival, the power imbalance in negotiations would be gone. That's why I support UBI and free housing for everyone.
Maybe inherently is too strong. Free housing and basic public healthcare would definitely change the dynamic. Theoretically, so would UBI, although I'm slightly skeptical about making it work.
I lean left, as you can see, but I'm actually leery of the government handing out appreciable sums of cash to people. I worry about the issue of vote bribing or the appearance of vote bribing, just for one thing.
On the personal level, growth should be emphasized: Resources should be distributed equitably so that each individual may move to the next level of the pyramid. I say equitably because different levels will require different amounts of resources according to each individual as well as their context, but whatever they need (not desire, but need), give it to them. Feeding hungry people and giving them shelter might not be as expensive or hard as providing safety and employment, which in turn can be more expensive than creating friendships and connections (or much harder, again, depending on context). In other words, where you are in the pyramid is not a predictor of how much resources a general individual will need, so we give them whatever they need for whatever level they are at.
On a societal level, you should be biased towards the base of the pyramid, as that is what you NEED the most (hence why we are using a hierarchy of needs as a framework to begin with). Unless satisfying one individual's or set of individuals' self-actualization needs also satisfies other people's physiological needs, the framework tells us (when applied at the societal level) to give MORE resources to, say, the thirsty before we give any more resources to, say, the self-actualized individual working on his OmniProcessor.
On a personal level, we are still providing everyone with what they need equitably, but when societal conflicts arise between, say, providing one individual with safety and another one with food, you allocate more resources to the people closer to the base because that is what we need the most according to this hierarchy. It is worth asking whether Maslow's is a good hierarchy/framework to use for resource allocation when it was originally developed to describe people's motivations.
On whether material goods can fulfil needs outside of physiological and safety needs: I think it's been shown that money (along with the material goods that it buys you) increases happiness... up to a point. So yes, material goods can help you become happier (and then, for example, make you more likely to make friends because happy people make more friends than, say, sad or otherwise depressed people). It will also help your self-esteem and self-actualization.
Interesting question. So the social needs are an obvious no. Esteem is based on comparing ourselves to others, so short of systematic, extensive training/indoctrination in not really needing other people's respect to feel good about ourselves, I think it will always be a problem for some people in any society. Self-actualization inherently requires individual work, but society can definitely make it more accessible by providing material resources for hobbies, e.g. free project cars to tinker with or a communal supply of canvas and paints.
A traditional solution to status needs is to declare that the older someone is, the higher they are in the status ladder. It kinda sucks at the beginning, but the nice thing is the certainty that every year you are going to get higher and higher in the status ladder no matter what.
A modern solution is to split people into million bubbles, each bubble believing that they are higher status than the rest of the society.
Over in the Warhammer community, they've noticed that the one faction that doesn't get any novels from its point of view is the Tyranids. The main theory as to why is that the Tyranids are a hive mind, and it's really difficult to tell a story from the point of view of a collective intelligence.
I can see why it would be challenging, but are there any cases in the broader science fiction field where this has been tried? Perhaps even done successfully?
Offhand, the way I would try to do it is to think of the hive mind sort of like a very large and impersonal military, a collection of bioform nodes, some with more authority and others with less. Then write the work in the form of the message traffic between the nodes. There would be no "I" there, just a bunch of separate bioform nodes trying to figure out what to do, being given tasks and reporting results. And there would be a continuing effort by the hive to maintain consensus among the main nodes as new and conflicting information was received. Most proposals to change the consensus would fail, but occasionally a suggestion would resonate and be taken up, and the hive's plan would change.
Funny you ask. Just yesterday I talked with a friend over some podcasters complaining that "Star Trek: First Contact" destroyed the Borg.
I think it fixed them. (Star Trek Voyager destroyed them.)
For a hive mind to say "we are the Borg" makes no sense. It makes no sense for any mind to say.
The "Borg Queen" saying "I am the Borg" sets things right. She was only, quite literally, the face of all the Borg drones taken as a whole. She was only referred to as "Borg Queen" out-of-universe.
It would have been cool if the Next Generation Enterprise crew had had to figure out who the hell was doing the talking when they met the Borg from the very beginning.
Hmm. Maybe. I'm not sure a hive mind necessarily has a unitary consciousness. Especially if it is very large and dispersed, and operates in the face of propagation delays, there just may not be a singular point of view. Dealing with it might be sort of like dealing with a very large and somewhat dysfunctional bureaucracy, where the answer depends to some extent on who you are talking to right now.
I think what we're bumping up against is the notion that not all hive minds are alike. On the one end you have a singular mind operating in dispersed fashion across many bodies. At the other end you have something more like a swarm, where there is no single consciousness, just a bunch of bodies operating with some degree of coordination, and the whole has some emergent behaviors.
The Man-Kzin Wars is a series set in Niven's "Known Space", featuring not just humans and tiger-like Kzin, but an entire bestiary of sentient species, including the Jotoki, each of which is born in a "tadpole" phase before fusing permanently with four others like it to form a sentient starfish. It might have something like the "hive" mentality you're looking for.
The series is now up to around fifteen volumes, containing dozens of short stories by multiple authors. It's possible that at least one of them features events from a Jotok's perspective.
I've had a go at writing this sort of thing, and the issue isn't that it's difficult; it's that it's difficult to do outside of a short story without being boring. My take is that the hive mind's awareness is vast and impersonal - it is aware of its individual fleets in much the same way that you are aware of your fingers. It experiences sucking a planet dry in much the same way as you experience shucking an oyster from its shell. So its narrative journey as a character is rather like a monologue by someone alone in a room full of ants. Which just doesn't give much for a plot to hang off of.
My most interesting writing* actually came from the realization that planet-sucking doesn't make much sense from a mass or energy standpoint - there's just so much more carbon and water up in space and it costs a significant fraction of all the energy you could liberate to push biomass up a gravity well. So I ended up fix-fic-ing that aspect pretty heavily...
*For me to write, not necessarily for anyone to read.
Check out The Children of the Sky by Vernor Vinge
The titular Ellimist in the Ellimist Chronicles book (spinoff of the Animorphs books) eventually becomes a sort of hive mind / distributed consciousness. While written for a younger audience, and the hive mindedness itself isn't a strong focus, it's an interesting sequence nonetheless.
I've seen a couple fanfics focused on genestealer cultists, which lets you have all the fun Tyranid bioweapons while still having individual characters who can have conversations.
Also in the webnovel space I've seen a few versions that have the hive mind have to focus their "attention" on a particular area while the rest of their bodies continue acting autonomously, meaning that they basically act as a single character in any given scene, but that "character" can be any scale from a single body in a conversation to a whole army of killer bugs.
Stellaris models hive-mind empires as sort of a hierarchical network - while the whole empire is nominally a single mind, it still needs infrastructure to transmit the hive mind's will down to the individual drones that are doing the work. That means you still have "leaders" (drones with greater brain capacity assigned to administrative roles) and "crime" (malfunctioning drones due to a lack of maintenance or bad situations on the planet). However, the hive mind doesn't have any internal factions or ethics, and there's no real "characters" besides the empire itself.
This is not as structurally detailed as your proposal, but His Name Was Death by Rafael Bernal is an eco sci fi classic with a hive mind as a major element, including quite a bit about the hive mind’s perspective. It’s a great and short read
The Ethereum repo forecasting challenge is fascinating; blending LLMs with human-ground-truthed data feels like a next-level way to measure real-world impact.
The feeling that, having done something, I could have done something else, that I was free to choose this or that - this feeling is deeper than any feeling about the conclusions of physics.
Feelings cannot be certain, true, but my certainty about freedom is greater than any certainty about physics.
It's true! We have stronger empirical evidence that we have free will than that atoms exist.
Well, I have a feeling, based on listening to Sam Harris' argument against free will, that free will doesn't exist.
Now what?
[Edited to update: The above perhaps comes across snarkier than I intended due to its brevity. But essentially I do mean to say, "right, I have a feeling about free will, too, it's the opposite of yours, so what do those data points mean?"]
To say there's no such thing as free will because physics (so everything's "really" just a zillion little chains of cause and effect interacting in a zillion different ways) is like saying there's no such thing as faces because they're "really" all just atoms. If someone, with their bare face hanging out, tells you they don't believe faces exist because physics - well, they'd be making every bit as much sense as someone who tells you they don't believe in free will for the same reason. Real faces are apparently made from real atoms; real free will appears to be made from certain real cause-and-effect chains (and how they interact with randomness).
If free will does not exist, okay. What difference, practically speaking, does it make? Sam Harris is not going to stop being Sam Harris if we all accept "no such thing as free will".
Apart from it being a philosophical problem, what is the importance or lack of importance of humans having free will? Crime will still exist and so will punishment or reformation, even if we accept that nobody *chooses* anything, it is all the subterranean process of a combination of drives, heredity, environment, conditioning, etc.
I'm interested because it's an argument we are currently having, but what is the real-world implication of "Okay, I don't have free will, just the illusion of choice but in fact all my decisions are pre-made for me".
Jamelle cannot help being a gang-banger, he was destined from the Big Bang onwards to deal drugs, run a stable of hos, and drive-by shoot his criminal rivals. Fine, Jamelle cannot be held responsible in a meaningful way for 'choices' that he has no capacity to alter. But we still don't want drive-by shootings, so Jamelle is still going to jail.
Yes? No? That's not how it works?
It would make a HUGE difference to crime and justice if my country was being operated by people who didn't believe in free will. I don't know about ireland, but over here, prisons are meant to be unpleasant. They are meant to make prisoners suffer because they made immoral choices. In a world where we all decided Free Will didn't exist, I imagine prisons would be more into segregated housing. Or a kind of quarantine area, where we non-judgmentally remanded people who are unfortunately programmed to harm others.
(I'm American, BTW. And no, prisons are not harsh here due to simple budgetary constraints. It's ideological.)
I think the problem of horrible conditions in prison (and yes in Irish prisons too, it's just that in America everything is turned up to eleven) is separate from treating people as either having free will or being meat robots.
If prisoners are judged to be incapable of change, and not to be held responsible because there is no way they can avoid doing crime since they are just meat robots running on their programming, then there is every reason to skimp on doing anything but holding them till execution, the end of their sentences, or natural death. Education? Intervention programmes? All useless; you can't hack the meat robots' programming.
Remanding them in a quarantine area could happen, with "once you go in you never get out" - even worse than any three strikes law - because "once you do a crime, you are demonstrating that your programming is to be a criminal and that can't be changed, so letting you out once you've served a sentence is stupid and wasteful."
And if they are nothing but meat robots, why waste any more resources on them than the bare necessities? We don't have any reason to treat them well because they're not people, now are they?
I don't know how Sam Harris approaches the Problem of Crime: does he only consider hardened and habitual criminals to be without free will, or does it apply even to the "first time criminal" and other, previously law-abiding citizens, who go in for white collar crime or a crime of passion? Since he has no idea why he's not a torture-rape-murderer, that leads me to think by the logic of his position he has to be consistent on "if you crime, you criminal, be it one crime or ninety".
I think you have falsely equated “person with no free will” with “meat robot incapable of change.” A person with no free will can subjectively FEEL like they have free will, because they subjectively feel as if they are choosing to commit crimes, go to jail, learn a valuable lesson and continue on with a crime-free life. But whatever this process feels like, and whatever it looks like from the outside, the entire process could be caused by involuntary psychological drives.
To put it another way, why should we assume that one's "meat programming" must necessarily put us into one of two categories, lifelong criminals or lifelong non-criminals? Surely our subconscious is more sophisticated than that. Why couldn't meat programming make one commit a major felony at 21 and then "neaten up"? Why couldn't it make one live a law-abiding life until the age of 70, at which point they murder an immediate family member?
As far as the Three Strikes law goes, the idea of a "strikes law" is fine by me. My only problem is that three is too few strikes for my liking. I could POTENTIALLY support a 10 Strikes law, depending on how exactly it's written; the devil is in the details on that one.
"Surely our subconscious is more sophisticated than that."
Sammy the Whammy says no (at least in the extract Christina quotes). The two criminals he gives as examples have no way of knowing what they really think or feel, or what motivates them deep down, and neither does he. He has no idea at all why he isn't out there torture-murdering, except mumble mumble genetics maybe? which are reliant on random chance shaking out the lots due to laws of physics? mumble mumble.
So there's no appeal to the subconscious, sophisticated or not:
"Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them. As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people."
I mean sure, to some degree, how to interact with free will or the lack of it is a paradox, granted.
Sam Harris also says, "there is no free will, but your choices still matter," in that the actions one takes will have consequences. So it's best to act as if one has free will in the day-to-day, even if the random ideas that occur to one (Should I put another 5% in my 401k? Should I rob this bank?) are simply floating up into one's awareness without any "conscious" "decision" to bring those thoughts into focus.
So yes, Jamelle should go to jail, unless of course we someday develop a brain-manipulation tech which corrects all of the factors which led him to gang-bang and which removes his impulse to gang-bang entirely.
But on a more personal level, I've found Harris's arguments about free will as it relates to criminality and other anti-social behaviors to be extremely useful in minimizing any sense of real anger or hatred towards anyone, including people who are actively hostile toward me. Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn, allows me to hold my rage at Jamelle's antisocial behavior as lightly as I hold the grizzly bear's. (https://www.samharris.org/blog/life-without-free-will)
Well, that works when I remember Sam Harris's argument about free will, which is often, but not always - because of course I don't have any control over when I happen to remember something.
See also: (https://www.samharris.org/blog/the-illusion-of-free-will)
> "Whether criminals like Hayes and Komisarjevsky [two home-invaders who tortured and murdered (most of) a family] can be trusted to honestly report their feelings and intentions is not the point: Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them. As sickening as I find their behavior, I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people. Even if you believe that every human being harbors an immortal soul, the problem of responsibility remains: I cannot take credit for the fact that I do not have the soul of a psychopath. If I had truly been in Komisarjevsky’s shoes on July 23, 2007—that is, if I had his genes and life experience and an identical brain (or soul) in an identical state—I would have acted exactly as he did. There is simply no intellectually respectable position from which to deny this. The role of luck, therefore, appears decisive.
> "Of course, if we learned that both these men had been suffering from brain tumors that explained their violent behavior, our moral intuitions would shift dramatically. But a neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions. Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it."
"Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn"
Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me. And Jamelle is not a bear. If he is to be treated as we treat bears (because that is the level he is operating on), then he will lose or have severely curtailed a lot of human rights.
If Jamelle is not a bear but a person, then he is expected to behave like a person. A bear may not have the capacity to be reasoned with, persuaded, or to understand why it should stop charging me. We expect Jamelle to have that capacity.
If he doesn't, then we are entitled to treat him as we would treat a threatening animal. You and me baby are nothing but mammals? Sure, but I still think Sam would prefer to be treated like a human, not like a bear.
"Whatever their conscious motives, these men cannot know why they are as they are."
Piffle. If these two persons are of even average intelligence, they know damn well that torture and murder are not to be done. I won't even argue "they know torture and murder are wrong" because we're not even evolved to that level. But they do know "do this thing, get in trouble with cops and the law and go to jail and, depending on the state where the offences took place, get lethal injections".
"There but for the grace of genes go I", Sam? Then the best solution for society is to shoot them down like rabid dogs, since they could not have done other than they did and indeed "I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people", then this is an argument for harsh, and not merciful, treatment. If they can't squash their impulses, then they are rabid dogs that need to be put down fast.
Besides, I'm damn sure Hayes and Komisarjevsky are perfectly well able to squash their impulses to victimise other people when they're in a situation and an environment where they'd get the shit kicked out of them if they tried it. How much torturing are they doing in jail, where their fellow inmates would shiv them for trying it on?
And funnily, we here in the pre-school service are wasting our time trying to teach small kids to behave, to share, to play together, not to bite and hit, to follow routines, to learn, in sum to squash impulses. Well gorsh, good job we have Sam Harris to tell us 'tis all in vain, we are rowing against the stream of Nature!
EDIT: Oh, and better tell Sam to revise that piece, he is inflicting hate speech violence by misgendering a valid woman!
https://en.wikipedia.org/wiki/Cheshire_murders
"Linda Mai Lee (known as Steven Hayes at the time)
While incarcerated, Lee came out as a trans woman and began hormone therapy as part of her gender transition. In an interview in October 2019, she said she had been diagnosed with a gender identity disorder at 16, but never treated.
By 2025 she had changed her legal name from Linda Hayes to Linda Mai Lee."
Funny how these people find their inner femininity when they're facing long jail terms in men's prisons as rapists/murderers. I'm sure everyone is much safer now this Real Woman is in her proper place amongst other women in a women's prison. What does Sammy have to say about this? Doubtless "it's the genes, the genes!"
(Yes, I am being viciously sarcastic here, because I don't believe these jailhouse realisations of 'one's true nature' when it comes to being transgender after committing violent crimes against women).
>Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me.
? Why would you think the courts would disagree with you? Not only would they say that you have the right to shoot him, they would say you have the right to shoot him if he had a rolled up newspaper, but you reasonably thought it was a knife. https://www.justia.com/criminal/docs/calcrim/500/505/
If you persuade the court you were in fear of your life, yes. If you try to persuade the court that anyway, it was like shooting a rabid dog and not a fellow human, less success with that approach, I feel.
It varies from country to country, doesn't it?
You're misunderstanding some key points of this line of argument.
The analogy between the man and the bear isn't intended to completely equate them—just to equate their lack of free will. How they respond to environmental effects (broadly defined as anything non-genetic) is still different due to, among other factors, their different intellectual capacities.
The man has higher intelligence, understands language, has likely grown up in a culture. All of these affect what stimuli/incentives he reacts to, and how he reacts. So it doesn't follow, just because he doesn't have free will, that he doesn't have the capacity to be persuaded ("this is wrong to do") or that he doesn't respond to incentives ("you shouldn't do this because you're very likely to end up in jail if you do"). What we (meaning Sam Harris and the people here defending his argument, though not necessarily everyone who makes the sometimes-ill-defined claim "there is no free will") mean when we say that the man doesn't have free will is simply that *whether or not* he does in fact end up being persuaded by moral reasoning/responding to incentives/etc. in any given situation is not something that *could have been otherwise*.
That is, at the "moment of choice," he isn't really making a Choice at a fundamental level, no matter how much it feels to him that he is, but rather his actions are following from some complicated combination of his genetics, the results of his upbringing, his knowledge of the potential consequences of his actions, his sense of morality, random fluctuations in the quantum fields that govern the particles that make up his brain and body, and many other things—none of which are ultimately authored by him.
So no, it does not follow that it's useless to try to teach little kids how to behave in preschool—these experiences are likely to influence what kinds of people they will be and the kinds of behaviors they will tend to display (just as training a puppy might reduce the likelihood that it will display aggressive behavior as an adult). And regarding your point about the gender transitioning prisoner—again, if a criminal is trying to game the system, they are just responding to incentives, and there is no contradiction with the nonexistence of free will.
I think you (and many others do this too) are taking the claim "there is no free will" to be an assertion of some combination of lack of responsibility, moral relativism, opposition to punishment, and lack of respect for human dignity. I would balk at these things too (as would Sam Harris, as is evident from his other work), but that is not what is happening here.
You're actually right to imply in your earlier comment that not much practically follows from this argument. Personally I mostly think of it as an interesting intellectual question, but as Christina the StoryGirl suggests, there may be more practical applications in the future. And for now, as she also says, it's a reason to temper one's own anger and hatred in response to heinous acts, which is probably good for the soul (however literally you interpret that phrase).
👆👆👆👆👆
All of this.
Thanks for this, seriously, I started to go into the same analogy about training puppies, but then got distracted by the sexier topic of violence.
But yes, this is a much more elegant and useful comment than anything I've written in this thread.
>> ""Sam Harris's argument that there's no meaningful difference between Jamelle swinging at you with a butcher knife and a grizzly bear charging you across your lawn"
> "Then I should be able to shoot Jamelle as I would shoot a bear. However, I think the courts would disagree with me.
I literally own and am licensed to carry a gun in order to shoot people charging at me with butcher knives or otherwise acting with clear deadly threat. I even carry insurance to defend me in both criminal and civil court should I ever be forced to injure or kill someone in self-defense, because the vast majority of American courts very correctly recognize that even gravely harming one's attacker in clear self-defense is legally justifiable (even if the legal process of proving that is difficult and very, very expensive).
Normally I would address all or most points in a comment, but if you don't believe you have a fundamental right and duty to defend yourself with deadly force against a deadly threat (and you don't believe that some violent criminals cannot be deterred by persuasive arguments for why they shouldn't crime), I think we have a deep cultural and philosophical divide between us which likely can't be bridged.
My mother was an expert shot with a pistol, and kept one in the house. Even when she was in her 80s, I trusted her judgment and her aim. Asked her once where she would shoot an intruder, and she said in the legs. That's always seemed to me like it would suffice to disable and distract an assailant, but I have no data to go on. What do you think?
"I literally own and am licensed to carry a gun in order to shoot people charging at me with butcher knives or otherwise acting with clear deadly threat. "
Yes. And if you say "I shot that black guy because he was acting like a bear, not a human being", how far would you get?
"some violent criminals cannot be deterred by persuasive arguments for why they shouldn't crime"
Isn't that Harris' argument as you quote it? The perpetrators cannot be persuaded because they can't identify their underlying motives because they lack any capacity to do so, their motivations are set in stone by the deterministic universe, they cannot and could not have acted other than they did.
Show me where "persuasive arguments for why they shouldn't crime" fits in there.
On the contrary, I believe that even making every allowance for really shitty upbringing, the perps could have chosen otherwise, and that if you can manage to find some iota of humanity within them, you can try the persuasive arguments as to why crime is bad and wrong. But that's not Sam Harris, according to what you quoted.
Arguments should lead to conclusions, not feelings.
Yes, that's one of the points I was trying to make with my comment.
You working the night shift?
Yes, why do you ask?
Oh, because I’d stayed up all night and it was 5 am or so where I am, and it was oddly nice to discover someone I knew awake, like another cricket chirping. Wondered what you were up to.
Yes, had you wanted to, you could have done something else. If all else prior to your action was equal, though - if you still intended to do the same thing, but for some inexplicable reason you did otherwise instead - that would be the opposite of free will.
You are physics, and physics is what links what you want to do to what you do. Physics is the thing that connects the cause of your intent to the effect of your actions; it is what means you have free will.
Imagine if it were otherwise - if you were trapped in your body, forced to watch it always /do otherwise/ in spite of what you want.
"You are physics, and physics is what links what you want to do to what you do. "
And yet:
"Romans 7:15-20
15 For I do not understand my own actions. For I do not do what I want, but I do the very thing I hate. 16 Now if I do what I do not want, I agree with the law, that it is good. 17 So now it is no longer I who do it, but sin that dwells within me. 18 For I know that nothing good dwells in me, that is, in my flesh. For I have the desire to do what is right, but not the ability to carry it out. 19 For I do not do the good I want, but the evil I do not want is what I keep on doing. 20 Now if I do what I do not want, it is no longer I who do it, but sin that dwells within me."
Or Poe's "The Imp of the Perverse":
"Induction, a posteriori, would have brought phrenology to admit, as an innate and primitive principle of human action, a paradoxical something, which we may call perverseness, for want of a more characteristic term. In the sense I intend, it is, in fact, a mobile without motive, a motive not motivirt. Through its promptings we act without comprehensible object; or, if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say, that through its promptings we act, for the reason that we should not. In theory, no reason can be more unreasonable, but, in fact, there is none more strong. With certain minds, under certain conditions, it becomes absolutely irresistible. I am not more certain that I breathe, than that the assurance of the wrong or error of any action is often the one unconquerable force which impels us, and alone impels us to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution into ulterior elements. It is a radical, a primitive impulse—elementary. It will be said, I am aware, that when we persist in acts because we feel we should not persist in them, our conduct is but a modification of that which ordinarily springs from the combativeness of phrenology. But a glance will show the fallacy of this idea. The phrenological combativeness has for its essence, the necessity of self-defence. It is our safeguard against injury. Its principle regards our well-being; and thus the desire to be well is excited simultaneously with its development. It follows, that the desire to be well must be excited simultaneously with any principle which shall be merely a modification of combativeness, but in the case of that something which I term perverseness, the desire to be well is not only not aroused, but a strongly antagonistical sentiment exists."
So there is a tension between the "I" which wishes to choose, and the physics which does. It's easy to see, on that basis, that the actions are what count and not the intentions, that the physical action is carried out by physics, and hence physics bears the rule and not the phantasmal "I" of "free will".
But then, the opposite query arises: what then is this tension between the 'will' and physics? If I am physics and physics is me, why is there disharmony between "what I want to do" and "what I do"?
Paul sounds very sophisticated here, very modern. I wonder if that's the neoplatonic influence or if it's All Paul.
> why is there disharmony between "what I want to do" and "what I do"?
My layman understanding is that you are made of many components, some of which sadly in this broken world are at odds with each other.
Our kindly host is perhaps rather better qualified than I to opine on executive dysfunction.
How would you even know if the people around you could cook rice or not? Where would you be in a position to measure that unless you were actually cooking rice together, which surely is not something that happens a lot, at least for men like us.
"Executive dysfunction does occur to a minor degree in all individuals on both short-term and long-term scales."
https://en.wikipedia.org/wiki/Executive_dysfunction
Free action involves rational judgment. A judgment is rational to the extent it is not physics (See Miracles by CS Lewis). Hence, I, being capable of free actions, am not just physics.
If it is just physics, there is no "me". Stones don't have "me".
Bit of a goalpost shift there!
> A judgment is rational to the extent it is not physics
Those are certainly all words.
You might enjoy https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics and the posts it links to.
> If it is just physics
"just" is doing an awful lot of work there.
Try: https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real
"Thou Art Physics" just asserts materialism, it doesn't defend it. Which is fine, I don't think Big Yud wrote it in order to defend materialism, his audience has always been materialists. But if you don't agree that we are "physics" then it doesn't provide an argument to sway you.
The whole debate is over the fact that "physics" (as in, atoms and energy following the laws of physics) does not seem capable of producing what we experience (free will, reasoning, etc). You might disagree and think that physics is capable of producing those things, but it doesn't answer the question to assert "Well, you are physics so physics must be doing all those things you are experiencing."
Here's part of Lewis's argument from "Miracles", if you're interested you can read the rest of it here, it's the entirety of Chapter 3 (https://www.basicincome.com/bp/files/Miracles-C_S_Lewis.pdf):
"The easiest way of exhibiting this is to notice the two senses of the word ‘because’. We can say, ‘Grandfather is ill today ‘because’ he ate lobster yesterday.’ We can also say, ‘Grandfather must be ill today ‘because’ he hasn’t got up yet (and we know he is an invariably early riser when he is well).’ In the first sentence ‘because’ indicates the relation of Cause and Effect: The eating made him ill. In the second, it indicates the relation of what logicians call Ground and Consequent. The old man’s late rising is not the cause of his disorder but the reason why we believe him to be disordered. There is a similar difference between ‘He cried out ‘because’ it hurt him’ (Cause and Effect) and ‘It must have hurt him ‘because’ he cried out’ (Ground and Consequent). We are especially familiar with the Ground and Consequent because in mathematical reasoning: ‘A = C because, as we have already proved, they are both equal to B.’
"The one indicates a dynamic connection between events or ‘states of affairs’; the other, a logical relation between beliefs or assertions.
"Now a train of reasoning has no value as a means of finding truth unless each step in it is connected with what went before in the Ground-Consequent relation. If our B does not follow logically from our A, we think in vain. If what we think at the end of our reasoning is to be true, the correct answer to the question, ‘Why do you think this?’ must begin with the Ground-Consequent ‘because’.
"On the other hand, every event in Nature must be connected with previous events in the Cause and Effect relation. But our acts of thinking are events. Therefore the true answer to ‘Why do you think this?’ must begin with the Cause-Effect ‘because’.
"Unless our conclusion is the logical consequent from a ground it will be worthless and could be true only by a fluke. Unless it is the effect of a cause, it cannot occur at all. It looks therefore, as if, in order for a train of thought to have any value, these two systems of connection must apply simultaneously to the same series of mental acts.
"But unfortunately the two systems are wholly distinct. To be caused is not to be proved. Wishful thinkings, prejudices, and the delusions of madness, are all caused, but they are ungrounded. Indeed to be caused is so different from being proved that we behave in disputation as if they were mutually exclusive. The mere existence of causes for a belief is popularly treated as raising a presumption that it is groundless, and the most popular way of discrediting a person’s opinions is to explain them causally—‘You say that ‘because’ (Cause and Effect) you are a capitalist, or a hypochondriac, or a mere man, or only a woman’. The implication is that if causes fully account for a belief, then, since causes work inevitably, the belief would have had to arise whether it had grounds or not. We need not, it is felt, consider grounds for something which can be fully explained without them. "
Isn’t this just equivocation, though? Sure, the English word “cause” means more than one thing, but why should that fact prove anything about determinism?
“The implication is that if causes fully account for a belief, then, since causes work inevitably, the belief would have had to arise whether it had grounds or not” - that’s simply wrong on the face of it. A belief has grounds if there is a cause and effect chain linking it to the things the belief is about. e.g, if I profess “the cat is on the mat” after photons reflected from the cat hit my eyes, this is grounded in a way that my professing this after merely dreaming the cat is not.
The implication when people say things like the ones in Lewis’s examples is that the beliefs are not grounded because the cause and effect chain leading up to them is rooted in something other than the true state of the world that the belief is about; certainly not that things would somehow be better if no causal chain linking the map and the territory existed at all, that’s crazy talk! "You say this because you dreamt that cat" is an accusation that you are wrong because the cause/effect chain grounds in something other than the actual state of the world being professed, not an accusation that you are wrong merely because a cause/effect chain exists!
Much as we may dump on Yud, he has a sequence of posts about this too:
cf. https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence
I agree that Yud does not provide a comprehensive defence in thou art physics. OP did not open this thread with philosophical rigor, however; they opened with a complaint that they had difficulty /feeling/ compatibilism is true - that their intuition was that it is not enough: "this feeling is deeper than any feeling about conclusions of physics".
Indeed, you make a similar complaint with “...does not seem capable of producing what we experience”. As a description of how that intuition arises and what an alternative way might be like, “thou art physics” doesn’t do too bad a job. (It also has the advantages over, say, Dennett’s books of being available online and being a forum chat sized read someone completely fresh to it might plausibly actually go and read in this kind of setting instead of, y’know, a whole damn book).
At the end of the day, I am not a telepath and can't interact with your feelings directly. The best I can do is gesture to alternative ways of being and hope that at some point enough things click for the reader to get some kind of sense of what it is like to be in someone else's head, thereafter leaving them with enough alternatives to choose between that they can support their beliefs, whatever those ends up being, with more than just the feeling that they are trapped into believing things by their intuitions.
>A belief has grounds if there is a cause and effect chain linking it to the things the belief is about...if I profess “the cat is on the mat” after photons reflected from the cat hit my eyes, this is grounded in a way that my professing this after merely dreaming the cat is not.
Both seeing a cat and dreaming about a cat are caused in the "cause and effect" sense, or else they wouldn't happen. Only one of them is grounded, however. That's Lewis's whole point: recognizing the difference between seeing a cat which is really there and dreaming of a cat which isn't really there requires ground-and-consequent chains of reasoning, while cause-and-effect chains of causation do not have to be logically grounded at all. You can have cause and effect chains of causation that produce completely ungrounded beliefs, such as dreaming of a cat or a drunk man hallucinating. This works fine for non-materialists: they can explain the ungrounded beliefs, like the dream and the hallucination, as being produced by physical chains of cause and effect, while explaining the grounded beliefs as being caused by chains of logic. Yet if those chains of logic are actually caused by the same chains of cause and effect that create ungrounded beliefs, then we can have no confidence that beliefs we arrive at due to logic are different than beliefs we arrive at due to chemistry. All beliefs are due to chemistry; the logic is just what it feels like when the chemistry is happening.
As Lewis puts it:
"Acts of thinking are no doubt events; but they are a very special sort of events. They are ‘about’ something other than themselves and can be true or false. Events in general are not ‘about’ anything and cannot be true or false. (To say ‘these events, or facts are false’ means of course that someone’s account of them is false). Hence acts of inference can, and must, be considered in two different lights. On the one hand they are subjective events, items in somebody’s psychological history. On the other hand, they are insights into, or knowings of, something other than themselves. What from the first point of view is the psychological transition from thought A to thought B, at some particular moment in some particular mind, is, from the thinker’s point of view a perception of an implication (if A, then B). When we are adopting the psychological point of view we may use the past tense. ‘B followed A in my thoughts.’ But when we assert the implication we always use the present—‘B follows from A’. If it ever ‘follows from’ in the logical sense, it does so always. And we cannot possibly reject the second point of view as a subjective illusion without discrediting all human knowledge. For we can know nothing, beyond our own sensations at the moment unless the act of inference is the real insight that it claims to be.
"But it can be this only on certain terms. An act of knowing must be determined, in a sense, solely by what is known; we must know it to be thus solely because it is thus. That is what knowing means. You may call this a Cause and Effect because, and call ‘being known’ a mode of causation if you like. But it is a unique mode. The act of knowing has no doubt various conditions, without which it could not occur: attention, and the states of will and health which this presupposes. But its positive character must be determined by the truth it knows. If it were totally explicable from other sources it would cease to be knowledge, just as (to use the sensory parallel) the ringing in my ears ceases to be what we mean by ‘hearing’ if it can be fully explained from causes other than a noise in the outer world— such as, say, the tinnitus produced by a bad cold. If what seems an act of knowledge is partially explicable from other sources, then the knowing (properly so called) in it is just what they leave over, just what demands, for its explanation, the thing known, as real hearing is what is left after you have discounted the tinnitus. Any thing which professes to explain our reasoning fully without introducing an act of knowing thus solely determined by what is known, is really a theory that there is no reasoning.
"But this, as it seems to me, is what Naturalism is bound to do. It offers what professes to be a full account of our mental behaviour; but this account, on inspection, leaves no room for the acts of knowing or insight on which the whole value of our thinking, as a means to truth, depends."
All feelings are interpreted. You could interpret your feeling as "figuring out which one of the seemingly possible futures is the real one".
As an analogy, imagine that you are preparing for a sport competition. You could win, you could lose, you could end up in any place... there are many possible outcomes. And the real outcome depends on how hard you try. And yet there are external factors. You cannot "just choose freely" to get the gold. You can only choose to try hard, but whether that gets you the gold, depends on many things, like the capabilities of your body, what the other competitors do, the weather, etc.
You could apply a similar perspective to willpower. You can (and perhaps should) try, but the result depends on whether some parts of your brain betray you, and other circumstances.
And maybe you can go further and apply that perspective to everything. (Not sure, didn't try.)
The feeling is about the past--that I could have chosen differently.
Why love? Love still feels amazing. People say that they were destined for each other, but that doesn't trivialize their relationship in any way.
Another example of how intelligence alone isn’t enough for individual survival. Two people with survival training were put down in the Amazon Rain Forest, and they would’ve starved to death if they hadn’t been retrieved in three weeks (evidently this is some sort of bizarre TV series?).
A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. I suspect they’d just die. Or maybe it was 10,000. I suspect they wouldn’t do much better unless they already had tools and seed stock.
https://open.substack.com/pub/braddelong/p/does-each-of-us-have-a-big-enough?r=7xjun&utm_medium=ios
You make the mistake of thinking 1000 people are just 1000 individual people, but cooperation is what allowed humans to avoid death and become apex predators. There is a qualitative difference between a group and an individual.
The other key ingredient, beyond cooperation, is culture, where strategies for survival and resource extraction are passed down from generation to generation. But 1,000 people taken at random from Silicon Valley tech companies? Unless they all went through training in how to extract resources from the wild, I wouldn't give odds on them being very successful.
Although it was a much smaller group, I'm thinking of the Donner expedition. They were well-equipped, but they couldn't deal with getting snowed in. Meanwhile, the indigenous Washoe people could survive winter in the Sierras because their culture gave them the skillsets they needed to survive.
Absolutely agree that 1000 random people would die pretty fast, but to be fair in the original thought experiment you get to hand select people with specific skills and personalities.
I thought I was pretty clear. I added some emphasis to my OP to mitigate confusion...
> Another example of how intelligence alone isn’t enough for *INDIVIDUAL* survival. Two people with survival training were put down in the Amazon Rain Forest, and they would’ve starved to death if they hadn’t been retrieved in three weeks...
Even trained survivalists would have trouble on their own over the long term.
> A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. *I suspect they’d just die.* Or maybe it was 10,000. *I suspect they wouldn’t do much better unless they already had tools and seed stock.*
But the "we're smart enough to make tools from scratch" crowd doesn't seem to buy my thesis. ;-)
I know somebody who would do fine in that situation, because they have been in that kind of situation many times before. The individual in question has commented that the survival techniques are all illegal now; hunting and fishing techniques that are considered "unsporting" (they're too effective).
But they're also in their 60s, and learned in their own youth from old men who had also done that kind of thing; I don't know anybody younger than that with those kinds of skills.
The issue, quite simply, is technological; it's not that the technologies are lost, exactly, so much as that knowledge of them is very thinly distributed. Do you know how to pick up and position a 2,000 pound rock into an elevated location using nothing but materials you'd find in a forest? I didn't until I watched a video of a guy doing it and went "Oh, that's incredibly clever."
Take 1,000 average city-dwellers, and yeah, they'd probably die. Take 1,000 average rural people and they stand a better chance (mostly leaning on the memories of the elderly, whose grandparents taught them the survival skills you'd need).
Take 50 very carefully selected people and they could rebuild civilization, though.
Yes. If we trained 1,000 youngsters in a wide variety of survival skills, with some specializing in specific skills (such as fiber extraction, thread- and rope-making, net weaving, creating medicinals from native plants, etc.), gave them a variety of steel tools (including knives, hatchets, fishing hooks, awls, adzes, and the such), and then, as young adults, left them on a parallel Earth (with the fauna of the Pleistocene) in a resource-rich area of a temperate zone (large marshes would be ideal), I'm sure they'd survive to reproduce and then increase their numbers. The genetic bottleneck issue might come back to haunt them down the line, though.
OTOH, even if they knew how to knap flint and chert, and how to tan and work leather, and we dropped them naked on the parallel Earth without tools or clothing, I'd lower the odds of them flourishing and populating the parallel Earth.
Those who received a supply of tools would likely have functional villages with permanent dwellings built within a year. The ones without the tools would have to work longer and harder to reach that village state. They could probably do it with some die-off the first winter. The question is, would the initial die-off be large enough to no longer have a sustainable breeding population?
Either way, civilization would be long in the future, though. Domestication of grains and herd animals would have to start over. And they would need some mechanism of intergenerational memory to know that such things were even possible.
But if the settlers of this parallel Earth received seed stock and a variety of domesticated animals, civilization would happen quicker—if they could keep their crops growing and herd animals reproducing during the initial years.
You can skip straight to iron, you don't need to start in the stone age; you just need wood, sand, iron ore (if it's dirt and it's red it will probably work), and, ideally, a good bend in a waterway. Also two or three days, because you need to make charcoal as an intermediate step, because you need some carbon to add to your ore.
If you can find a source of wax you can cast some fairly complex stuff, but you can do simple stuff like a primitive iron hammer with just sand, and then iterate from there.
Hmmm. How many people could do this without (a) the knowledge (b) and practice? And without the tools on the list in this link, it's going to be hella harder.
Put people down in the wild, without tools, most of them are going to die before someone has the free time to smelt iron. And the memory of such things would likely be lost within a generation. I don't see why this is in the least bit controversial. ;-)
https://www.thecrucible.org/how-to-smelt-iron/
Hence, fifty well-chosen people could rebuild civilization, but randomly choosing 1,000 people wouldn't necessarily end as well.
I could get iron smelting up and running on my own, personally, although I'd expect a couple of weeks to get it running properly (I have some spare weight, so starvation isn't an immediate issue). Need to hand-pan some black sand (which is mostly hematite and/or magnetite) out of your dirt, use that to cast an iron pan (cast a disk and hammer it into shape with a rock while it cools) to speed up panning and also provide a way to boil water, then you're going to want to cast a primitive hand axe, which would allow the construction of a wooden sluice, which automates most of the panning work.
Typical earth contains 5-10% black sand by weight; if your dirt is red the concentration is probably higher.
If you're lucky enough to have some suitable rocks around you to knap into blades, you may be able to skip the blade and maybe even the axe, but personally I'd just go straight for the iron, because I actually know how to do that.
The method described in that article is basically right, no surprise, but too complicated. The furnace is the issue, and can be solved by building a variation on a Dakota Fire Hole, which is what we want the bend in a waterway for (also we can then set up our sluice in the waterway itself) - the high bank is an ideal place to dig out a primitive furnace, because the curved embankment will help capture airflow for our intake. We don't have the materials to build a bellows, after all, and our fuel is going to be low-quality (we're going to be relying on fallen branches and other wooden debris for a while, properly cured wood takes time).
You don't want to use the fire hole for cooking, mind - we want our fires to be inefficient, because they're going to be where we are sourcing our charcoal until we can get a proper operation going, and the fire hole is too good at what it does.
That would make a great idea for a reality TV show. Put down several teams of two or three people in the wilds — in places where iron ore is available. Let them have food and water, so they wouldn't have to worry about hunting and gathering. But give them *no tools.* Whichever team is able to create the first iron hatchet from natural resources wins a big prize. We could call the series "The Iron Age," or something like that. The teams will never have worked with iron before, but they'll each receive a booklet outlining the basic steps to follow.
> evidently this is some sort of bizarre TV series?
It's clear you haven't watched it, or else you wouldn't be trying to draw conclusions from it
The premise of the show is basically: Take a woman with survival skills and a "macho macho" man who -thinks- he has survival skills and drop them somewhere, comedy ensues
Well, you've confirmed my assumptions that it's a bizarre premise. According to the link I posted, it appears that these two individuals wouldn't have survived much longer if they hadn't been removed from the situation. I'm not sure of the comedic value.
BTW, I don't own a television, and I haven't watched network TV since circa 1985, so I pick up most TV-related knowledge secondhand.
There is another show called Naked And Afraid where they drop True survival experts into wilderness situations and failure is COMMON. (They are not all experts but some of them are real professionals who come to grief)
Hey, I'm the same as you about TV, and even stopped about the same time. I had an el cheapo black and white one in grad school, and often ended the day by smoking weed and watching Letterman, but I don't recall watching anything else on it, and that's the last era when I was a TV owner. I never decided against being a TV owner, just kind of drifted away from it, and now the sound of TV is like a cheese grater on my nerves -- especially the over-bright plasticky voices in ads and comedies, and the pseudo-neutral drone of the news anchors. Did you drift away too, or decide one day to throw the fucker in the trash?
I shot my TV.
After classes, I would get stoned and get sucked into game shows and dumb sitcoms. I noticed that TV was big time waster. Also, I noticed that at parties, when someone turned on the TV, it would kill all conversation. Everyone would get hypnotised by the TV.
Back then, there were bumper stickers that read, "Shoot your television!" I mentioned to some friends that that sounded like a good idea. One of my friends thought it would be fun to turn my big old Sylvania Color TV into a terrarium for his iguana. So, I borrowed my mom's Ruger .357 Magnum, and with the help of my friend, I carted my TV down to the local gravel pit and shot it. What a mess! The cathode ray tube imploded, and its glass shattered into fine pieces. The glass shards were like fine dust. It was useless as a terrarium. And I probably dumped a bunch of toxic materials into the environment (a stupid move on my part!).
But in the hours when I could have been bettering myself by watching Vanna spin the wheel and Alex do his thing, I read thousands and thousands of pages of novels, history, philosophy, and science. What a waste! <snarkasm>
My theory is that humans are naturally selected to be susceptible to TV. We spent hundreds of thousands of years safe around campfires, watching the flames dance while someone sang or told us stories. Being hypnotised by campfires kept us safe and close at night. And TVs fill that behavioral spot in civilized humans.
Well done, beowulf888
That was a really fun story to read, thank you! You sound like somebody who would be fun to drink beers with.
Just trying to say that it's trash tv and we can't really draw any conclusions from it
> BTW, I don't own a television
Good, me neither! Got rid of mine 15+ years ago
Alas, my girlfriend does, so I get exposed to it a bit more directly
Many, many professional wilderness guides washed out of shows like this. There's more than one.
Unfortunately, I can get sucked into streaming video. Luckily, I have a low tolerance for stupidity in shows. But every so often, I get sucked into a good series, and I end up binge-watching it on my laptop. The latest season of Slow Horses just started up. And the next season of The Diplomat is due for release in a couple of weeks. Sigh. I can't completely escape my addiction.
The show Alone is a good example of how—with minimal tools to start—survival without civilization as we know it is extremely difficult.
Alone is obviously people by themselves, but they’re the best trained people the show can find for such circumstances (with 10 tools of their choice) and they barely make it 100 days. I really doubt a larger group would fare differently.
Safe food, water, and shelter are just incredibly difficult and labor intensive to procure. Tools, supply-chains, and modern manufacturing really are basically magic.
One thing that makes Alone especially hard is that they start them off at the end of autumn or the beginning of winter. I get the sense that quite a few of the more able contestants could go indefinitely if they had the whole spring and summer to stockpile food. They're also limited to staying in one spot and can't move to find better fishing or game spots. The arctic region is also more hostile than most places on earth would have been before civilisation.
They're also forced to stay in the camp / area they've been assigned by the producers, and of course they have to follow the law with regard to hunting (they can't kill animals out of season, etc).
Yeah, that's definitely loading the dice. No stockpile of food, during the lean season, starting off from zero - even experienced people who know the environment will struggle, and dump someone there who has never lived in that place so doesn't know the cycles of weather, foraging food, etc.? Setting them up to fail.
I thought you were talking about "Alive", which... yeah same conclusion.
I still think you’re underestimating the difficulty in the details…
Some examples:
How many people would be able to orchestrate running a herd of buffalo off a cliff? Or “run a deer to death” (which I’ve never heard of and as a hunter myself seems virtually impossible)?
Who’s going to build an extensive smoker system for a buffalo herd’s worth of meat and get everything smoked before the meat rots in the heat? How will the predators/scavengers be successfully kept at bay from the meat? How much will eating near rock-hard smoked meat for months nourish the population?
These are skills that our ancestor populations gained over centuries/millennia that are virtually gone now and can't just be picked back up like riding a bike.
Humans can indeed outrun certain species of deer, by chasing them until they collapse of exhaustion. This is a venerable method of hunting on the African savannah. It would not work in a forested area.
I have hiked 40 miles in a day when I was younger. Admittedly, doing it again several days in a row is harder than doing it once, but I would expect typical humans in reasonable physical condition to be able to do it. Humans are really good at persistence hunting.
(Arguably, hiking Yosemite Upper Fall, or the Grand Canyon down to the river and back, is harder work than 40 miles on the flat, because of the change in elevation)
This demonstrates a very weak proposition. Yes, the Amazon is nasty place to be, and it's hard to build civilization there. Now, if they'd conducted this experiment in something like, say, the Nile Delta, I expect they'd have fared better (and made for a rather different TV series).
The Nile delta of today or the Nile delta of pre-civilization times? I immediately flashed on malaria, crocodiles, and water-borne parasites. Oh my.
I suspect most of the world is relatively inhospitable to humans alone without the necessary tools and culturally transmitted survival knowledge. Arctic? Nope. Boreal forests? Less deadly than the Arctic, but there's only a short season to build shelter for the winter. Deserts? Nope. African savannahs? Nope (but ironic because this is where humans presumably evolved). The same goes for prairies (which were largely avoided by indigenous tribes before the horse was introduced to North America). Mediterranean climates? Possibly. Temperate woodlands? Possibly, but you've still got winter to deal with.
As for the Amazon basin, it used to be the center of a massive agricultural civilization with roads and cities, probably supporting millions of people until the Spanish and Portuguese arrived. The rainforest seems to be a relatively new thing.
> A while back someone asked how long would it take 1,000 people to rebuild civilization if they were on their own. I suspect they’d just die. Or maybe it was 10,000. I suspect they wouldn’t do much better unless they already had tools and seed stock.
What do you think the minimum number required would be to rebuild civilization without tools? Or do you think no amount is sufficient, and it's about how many generations they have to rebuild?
The only semi-successful example I can find is the mutineers of the HMS Bounty and they started out pretty well provisioned with tools.
Hi Liam. Have been wondering how your family is, kwim? If you have a chance and are so inclined, let me know -- but I understand what busy is, so no problem if you don't.
Survival and civilization are two different questions. And what level of civilization are you aiming for?
Without any modern steel tools, if the initial group of 1,000 or 10,000 had people trained in fiber extraction and twine and rope manufacture, flint-knapping, trapping techniques, hunting (including skinning and field dressing a carcass), leather tanning and leather working, basic midwifery, and probably a bunch of other "primitive" skills I haven't thought of, a group of 10,000 people who kept in contact with each other could probably pull themselves up to a Mesolithic level of resource extraction and survival—but with a significant chance of going extinct at some point. Even if they were to keep their birth rate high enough not to go extinct in a few generations, they'd face a bunch of genetic issues due to inbreeding (I wonder if the Sentinelese people have enough genetic diversity to survive for the long term). If the initial group were only 1,000 people, even with the necessary skill sets, I think there would be a high chance that they'd face extinction within a few generations.
A civilization of even Bronze-Age level sophistication is out of the question for tens of thousands of years unless the founder population had access to durable books that described the technologies accumulated over the past ten thousand years. And the founder population would have to create some strong cultural institutions to keep at least some of their descendants literate. Otherwise, we're talking tens of thousands of years or hundreds of thousands of years before humans reinvent civilization.
Here's a fun timeline of the evolution of human tools. It took a long time for humans to figure out all this stuff. I don't think it would go any quicker without some repository of knowledge.
https://www.historicaltechtree.com/
Yeah. SF stories of time travellers going back to, say, Ancient Rome and speedrunning civilisation with their superior technical knowledge are great fun, but not realistic.
It doesn't matter how smart you are and how much theoretical knowledge of metallurgy etc. you have. Great, you have 160 IQ and the history of the Industrial Revolution memorised. Now it's you, your bare hands, and a rock in a forest. Good luck re-creating the 19th century level of society in a month!
I came to a similar realization after a handful of camping trips. I'm an average camper/hiker, so nowhere near where these two contestants were at in terms of skills and physical stats, but it quickly became apparent to me that the only things standing between me and sure death were thin synthetic textiles and close proximity to a climate-controlled, self-propelled shelter. I.e., my weekend backpacking trip is merely living on borrowed time thanks to a set of extremely controlled parameters related to weather and distance to very solid shelter.
Actually met a hiker in need a month back in the Adirondacks, which served as a real-life warning of how a small mistake can quickly escalate to risk of life: she got caught in a storm that somehow wet her gear, which deprived her of sleep and warmth, which translated to minor scuffs and eventually falls. She made out A-OK with some help, but I could easily see a person spiral into making risky decisions, spraining an ankle, getting lost/falling into a ravine, and that being the end of the line. Very educational.
Have you ever read “to build a fire” by Jack London?
I did just now while waiting on my delayed plane. It definitely brings the point home, and I admit I enjoy London's style. Thanks for sharing
You’re welcome.
I had the same experience. Backpacking in hot weather, making sure each day took me past a good water source. First night, arrived at camping spot to find a tiny stream of duff-filled water coming down the hill at slightly more than a drip. Spent a couple hours collecting, filtering, and sanitizing about 2 quarts of cloudy water, then went to sleep worrying that what I'd drunk would make me sick. It didn't, but I bailed the next day. 2 quarts was barely enough to rehydrate me from the day, and what if the water at the next place simply was not there?
I am glad you came out OK!
Funny you mentioned that because on that same hike I did come across a guy that was coming down the mountain like a storm because he was trying to increase his body temp after having nearly succumbed to hypothermia.
"Nets aren't that hard to make either, given decent fibers (cotton, hemp etc)."
First, plant your cotton...
*IF* you are in a fertile area and a temperate to tropical climate with fast growing season and plenty of game, fish, etc. around to hunt, where there isn't the likelihood of freezing at night or needing more than basic shelter, then sure - sharpen your stick, dig in the soft, humus-rich earth, plant your seeds and fish/hunt plus forage for wild plants while you wait for the crops to grow.
And hope you don't succumb to illness, starvation, or natural disasters in the mean time.
Here is where I invoke the Famine. 19th century Ireland, part of the British Empire, all kinds of crops grown - and yet the poorest died of starvation. Questions such as "why didn't people fish, we're an island surrounded by the sea and with plenty of rivers?" are often asked, and there are reasons for why "no, they couldn't just live on fish alone".
https://shows.acast.com/irishhistory/episodes/why-didnt-irish-people-eat-fish-during-the-great-hunger
> Tools aren't that hard to make, depending on what you need. a pointed stick will let you plant seeds, for example. Nets aren't that hard to make either, given decent fibers (cotton, hemp etc).
Hypothetically, if I put you down somewhere random where there were plants that have been useful for fiber extraction (with no phone app to identify them and no Youtube videos to show you how to extract the fibers), how long do you think it would take to figure out which plants have suitable fibers and the best way to extract their fibers?
Next, you'll be given the task of spinning those fibers into twine or thread. Twisting the fibers together by hand will probably not work. Back in the Mesolithic, they would bore a hole in an antler and twist the fibers through the hole to make rope or a thick twine. We know this from analyzing the wear marks on antlers with holes in them that archaeologists used to think were ritual batons. They may have used flint awls to drill the holes. Or possibly simple bow drills, but you'd still face the problem of bootstrapping your twine production and shaping the drill wood.
I took a basic flint-knapping course way back when I was an undergrad in Anthropology. I would have flunked out of the Mesolithic, and my Paleolithic ancestors would have hooted at my attempt at an Acheulean hand ax. And even if flint were locally available, how skilled are you at identifying it? Flint can look like any other river cobble until you crack it open with a rock hammer (which you don't have in this hypothetical scenario). But say you solve the twine production problem without starving to death first, do you feel you could weave together a fish net that would be useful? Even looking at one as a model, I don't think I could.
And starting a fire without matches? You need sturdy twine and an axe to create a bow drill and a hearth stick. I couldn't bootstrap all the necessary materials to do this. I need my trusty survival hatchet, a hefty survival knife, some strong twine, and preferably a saw, and infinite patience on a dry day. ;-)
People underrate the skillfulness of "primitive" people.
https://youtu.be/_tpBCflcekU
Modern cotton is the result of human breeding. The original, which is still available, isn't a particularly useful fiber source, but there are theories (i.e., just so stories) about why it was chosen as a future fiber of choice for the Incas.
I grew up in NE US, and I couldn't identify the local fiber sources that the Indians used without a manual (or in my case ChatGPT). They are: dogbane, milkweed, basswood, nettles, and slippery elm. Milkweed and nettles I could ID, but I wouldn't know how to extract the fibers from them. The other three, I wouldn't know how to ID them.
But the point was, even people with the skills have trouble surviving alone. Brad DeLong's conclusion was: "it was and is not the ability to think up clever solutions to problems on the fly. Instead, it was pooled memory and anthology thinking-power, plus the division of labor that allows us to carve tools that contain the results of that collective thinking-power." With my editorial note being: i.e., cultural practices and memory that have learned to exploit specific environments.
AI 2027 contends that AGI companies will keep their most advanced models internal when they're close to ASI. The reasoning is that frontier models are expensive to run, so why waste the GPU time on inference when it could be used for training?
I notice I am confused. Couldn't they use the big frontier model to train a small model that's SOTA for released models that could be even less resource intensive than their currently released model? They call this "distillation" in this post: https://blog.ai-futures.org/p/making-sense-of-openais-models
As in, if "GPT-8" is the potential ASI, then use it to train GPT-7-mini to be nearly as good as it but using less inference compute than real GPT-7, then release that as GPT-8? Or will the time crunch be so serious at that point that you don't even want to take the time to do even that?
They're already doing this. o4 was never released, but distillations (o4-mini etc) were.
I think that's just a variation of "keep the true SOTA behind closed doors" by adding "and wheel out the mini version for profit reasons".
Thomas Lee has written an in-depth critical review of Yudkowsky and Soares's "If Anyone Builds It, Everyone Dies." But I received the link via email, so I don't know how to link to the Substack post from here; if someone else could provide that, I would appreciate it.
Do you mean this?: https://www.understandingai.org/p/the-case-for-ai-doom-isnt-very-convincing
Yeah, someone else already linked to it.
Could the Germans have won WWII had they not attacked the USSR?
We now know that Stalin was planning to attack Germany in 1942 or 43, and that the Ribbentrop-Molotov Pact was only meant to give him time to prepare for that. If the Germans had spent that same time building up defenses on their eastern border, maybe it would have kept the Red Army out, or proven so formidable that Stalin would have reconsidered his invasion.
The Soviets fought harder and were more devoted to victory because the Germans were the aggressors against them and committed many atrocities against Soviet civilians in captured areas. Without that, if the Soviets knew they were the ones fighting a war of aggression on foreign (German and Polish) soil, their morale would have been lower, which would have translated into worse battlefield performance and a greater willingness to end the invasion if it proved harder than expected.
Something that the other commenters haven't touched on: trade between the two parties was very lopsided in favor of the USSR, long term. Germany needed a lot of raw materials, especially grain and oil. The Soviets demanded machine tools and working examples of German industrial machinery in exchange. So the Soviets were building up their industrial capacity and knowledge, which would only make them even more of a military threat as time went on. Meanwhile the Germans needed Soviet supplies just to keep functioning at the same level.
I reviewed Wages of Destruction for last year's book review; I think it's the best book on those grand-strategic questions of the war.
https://claycubeomnibus.substack.com/p/book-review-wages-of-destruction
WoD argues that the main bottlenecks for the German military-industrial system were arable land to grow food and access to oil, both of which would have been solved if Germany had managed to capture the southwestern part of the USSR. Without capturing those regions, Germany would have been dependent on trade with the USSR to avoid succumbing to food or fuel shortages, which is a big risk to place on an unreliable trading partner.
In the grand scheme of things though, in a three-way fight between the USSR, Germany and the Western powers, having the Germans and Soviets expend themselves fighting each other is the ideal outcome for the Western powers, who win the war with relatively little fighting, and a major game-theoretic loss for Germany and the USSR, who both take extreme losses that mostly cancel each other out, benefitting neither. So there would have been a huge mutual benefit for Germany and the USSR if they had been able to cooperate longer than they actually managed through Molotov-Ribbentrop.
Realistically, though, either one placing trust in the other would have been an extreme risk, since they were so ideologically at odds; counterintuitively, making a pre-emptive first strike at an opportune moment was probably the more cautious move for Germany to make.
It's funny, I'm a huge fan of the book (have reread it a few times even) and my major takeaway was that the combination of Nazi ideology, and the economic consequences of enacting that ideology, made some kind of catastrophic conflict more or less inevitable for Germany. It also neatly explained the seeming paradox of how you could conquer the most industrialized and richest corner of the world and still have to manufacture all your own stuff at great expense.
As an aside, the most chilling part of the book for me was the explanation of the paradoxical tensions between their extermination camps and slave labour economy, leading to an insane situation where company accountants were bidding for slave labour and then trying to weasel just enough food out of the government (which wanted to starve them to death asap) to extract useful work out of them before they could expire. Generally the book was very good at making me understand how you could logically, rationally proceed from an insane premise and end up in the position of writing on behalf of Bayer to ask for your allocation of death camp slaves to be increased this quarter. Which is a pretty good intro to macroeconomics in general, actually.
Define "Germans" and "won". The Nazis conducted an unsustainable arms buildup specifically to enact a war against the East for lebensraum, with the ultimate idea of conducting a sustained war of racial survival against the "seat of world Jewry" - the United States. That's literally the plan here, as laid out by Hitler. And they acted accordingly, to the extent that by the late 1930s their economy was overheating and they were facing a currency exchange and exchange of trade crisis. Which they then solved by economically stripmining western Europe and whatever bits of they East they got ahold of.
So long as you have the Nazis running Germany, you have the buildup of forces and the need to put them to use to 'solve' the economic problems they cause. Their ideology gives them a plan and direction (conquer Europe, empty and colonise the East) and all these forces then lock them into a total war against the three largest powers on the planet (the US, USSR and British empire).
Only in a completely ahistorical scenario, one with no Nazis and no Hitler, do you end up with a 'normal' reactionary authoritarian in charge. And then the most likely outcome is a more limited war to 'retake' various bits of Europe that probably results in a drawn-out campaign against France and/or Poland (no massive, unsustainable switch to a wartime economy in the 30s means a smaller, less well equipped German army). At which point the British inevitably get involved and Germany ultimately loses or settles and keeps some of its gains. That's about the best outcome the Germans could realistically have expected.
If the US stays out, maybe. Otherwise the Soviet Union would have attacked at a time of its choosing with US support.
Also: Without any possibility of grabbing the oil fields of Baku, the German war plans don’t make much sense.
"Winning" is not well defined for the Germans in this case. They can't invade and totally subjugate their enemies the way we did to them, but they could perhaps reach a negotiated peace that lets them keep the things they really wanted.
Winning is fairly easy to define. Winning means that Germany gets control of whatever resources they can profitably extract, especially oil fields and fertile farmland in Western Russia, and that Russia makes no effort to build an army with which to attack Germany. For the latter, you don't need a German soldier on every crossroad, just a handful of officials to oversee the remaining Russian diplomacy, military, and economy.
More a matter of just how quickly Nazi Germany would have surrendered after German cities started getting nuked.
I don't know if it's true that morale would have been lower in a "war of aggression." That smacks of being a modern myth.
I know little about history, so this is totally an uninformed guess:
It seems to me that the main mistake the Germans made was trying to conquer too much. However, not attacking the USSR was probably not an option, because in that case the USSR would have attacked them later. The correct move would probably have been to avoid making an enemy out of the UK.
What I would do, in Hitler's place: Make it a part of my ideology that Britain is also a superior race (perhaps second only after Germans), and emphasize the similarities of my plans with British imperialism (basically that Germany is going to be Britain 2.0). And of course, not attack Britain. They wanted to avoid the war anyway, now they would have no reason to join it. I might even offer some part of France to them (as a compensation for some kind of conflict they had with France in the past). Later, when Japan attacked USA, I would throw Japan under the bus. After taking Poland and France, I would make it clear to countries in Western Europe that they are safe, and fully focus on the eastern front. I would accept Ukraine as an ally against the rest of the Soviet Union. I might offer territories near Leningrad to Finland, if they help me conquer the city.
But of course, someone thinking this carefully would probably not have started WW2.
> What I would do, in Hitler's place: Make it a part of my ideology that Britain is also a superior race (perhaps second only after Germans), and emphasize the similarities of my plans with British imperialism (basically that Germany is going to be Britain 2.0). And of course, not attack Britain.
He did believe that the British were superior and Germanic, admired the British, and talked up the British Empire.
He didn’t want war with Britain, but we declared war after he invaded Poland.
>Could the Germans have won WWII had they not attacked the USSR?
Very probably not. By mid-1941, Britain is firmly committed to the war and the US is actively supporting Britain with money and supplies. The US Navy started shooting German U-Boats on sight in September, and full American entry into the war was very likely to happen even without Pearl Harbor. Germany had already tried and failed to bomb Britain into submission. Invasion was pretty much impossible due to Britain being surrounded by water and having a much bigger navy, and Germany having no landing craft and very few seaworthy transport ships: the Operation Sealion war plans proposed using Rhine river barges towed by destroyers as invasion transports. Starving Britain into submission by sinking merchant ships was theoretically possible but still a pretty long shot.
Germany also had the major handicap of having very limited access to food and petroleum products: the US was the largest exporter of these, and the British blockade cut off most other sources. Staying at peace with the Soviets would have helped Germany somewhat, since Germany was buying Soviet food and oil exports, but I get the impression that it wouldn't have been enough.
>We now know that Stalin was planning to attack Germany in 1942 or 43
Do we? My understanding is that most mainstream historians have firmly rejected theories about Stalin having short-to-medium-term plans to invade Germany in late 1941, especially after having the opportunity to examine Soviet archives for evidence in the 1990s. I've heard of some "maybe in 1942 or 1943, but definitely not in 1941" remarks, but I had parsed those as more speculation than being based on any hard evidence of Stalin pursuing plans of invasion.
Britain could have defeated the Axis all by herself, although it would have been a longer and bloodier affair. Assuming Germany didn't build a nuclear weapon first, and it seems like Germany's nuclear program was moving at a snail's pace.
I think if it were just Britain and its colonies and dominions vs Germany and the other European axis powers, then it's probably a stalemate until one side can't sustain a war economy any longer or someone gets nukes. I don't think Britain has much prospect of successfully invading mainland Europe without American or Soviet assistance and Germany has effectively zero prospect of invading Britain.
Reasonable analysis. I'm glad the United States entered the war, but I think it's fine that we entered the war slowly, by gradually putting more pressure on the Axis until the Axis snapped and attacked first. I absolutely don't think we were obligated to join the war in 1939. Would joining earlier have saved some lives? Sure, and it would have ended other lives prematurely. There's no guarantee it would have been a net benefit for America or humanity. (What if FDR forced us into the war in 1939, and the American people were so aggrieved at this that isolationists seized power and dragged us back out of the war before the job was finished?)
I think you're correct. The Soviet plan to attack Germany, I believe, was little more than a contingency plan, wherein the General Staff draws up a strategy to attack just about everybody for use *in the event* that a sudden conflict arises with one of those countries, just so that nobody has to come up with one on the fly when minutes count, like in 1914. I don't think there was a plan being operationalized by the Soviets, or even close. I could be wrong.
On the other hand, if Germany didn't attack the USSR and started losing anyway, I can still imagine the USSR joining in at the end of the war, because Eastern Europe would, in the face of a German collapse, be free real estate.
Yep, that would have been very likely. Apart from the free real estate, like in Manchuria and the Kuril Islands, there were plenty of small nations, e.g. in South America, that declared war on Germany shortly before the end of the war. They were militarily useless and had no intention of sending any soldiers to fight, but it still qualified them for US military help to build their own militaries. It also qualified them to be founding members of the UN. So yes, plenty of incentives to pick the winning side.
"We now know that Stalin was planning to attack Germany in 1942 or 43" Do you have a reference for that?
If true, that merely confirms what Hitler suspected, and provided the rationale for attacking the USSR when he did. Unfortunately, I don't see a way out for Germany. I don't think that the morale difference between defenders and attackers is as strong an effect as you are implying.
France and Russia were allies, so an attack on one was going to end up as a war with the other (because they both knew that once their ally was gone, they were next). Hitler believed that he could not attack Russia without being attacked by France on the other front, and France was the weaker party, so he attacked France first.
But France and Great Britain were also allies, and for the same reason. So Hitler believed that he couldn't attack France without ending up in a war with GB, and he was probably correct. Thus, The Battle of Britain.
But you can't go to war with GB without involving the United States. GB was simply too valuable to the US economy for reasons of trade and debt, so the US isn't going to let GB fall.
So the interlocking chain of dominoes surrounding Germany is just too tight. And we all know what happened when they tried to take them all on at once, so...
An expansionist Germany in the 1940s is likely doomed.
> But you can't go to war with GB without involving the United States. GB was simply too valuable to the US economy for reasons of trade and debt, so the US isn't going to let GB fall.
That’s quite the rewriting of history.
Which is common in America - this idea that the US was going to go to war with Germany eventually, with or without Pearl Harbour. There's little evidence of that. In fact, Germany had to declare war on the US.
I would argue that our cultural and linguistic ties are the reason why America would not ever permit Britain to be overrun. The reason the United States didn't get involved earlier was possibly because American analysts determined that Britain was not facing an immediate, existential threat. Germany's naval power was far too modest to effect an invasion. Germany's air bombing was tragic and disruptive, but again, far short of what would be required to show an existential, immediate threat.
Britain was nearly lost. By the invasion of the USSR, most newspapers in the US were expecting a German victory in Europe. And yet sentiment stayed isolationist. This Anglo-Saxon alliance was often in Churchill's head.
Nearly lost? Flabbergasting. They possessed half the world. There were millions of colonials ready and willing to fight. The Germans were desperately short of supplies in late 1940, and they were using a lot of captured weapons. They didn't have the manpower to protect the land they had seized. Their Navy was a joke compared to the British Navy. “Nearly lost” seems so against all the evidence as I see it, but of course there are historians who believe this. World War II historiography is all over the place.
It was Churchill’s job to win the war as fast as possible, and that required him to tailor his language to a sense of urgency, and to inspire urgency in others. Including Americans.
I would also not put much stock in what the newspapers were saying in 1940, because, for moral, political and economic reasons, much of the upper class in this country absolutely wanted to get us into the war, including newspaper owners. Isolationism was much more of a grassroots thing. My personal take: it would have been fine for the United States to join the war a year earlier. But it wasn't 100% essential for the survival of Britain.
> Nearly lost? Flabbergasting. They possessed half the world. There were millions of colonials ready and willing to fight
This is massively retrospective thinking, and even if true in retrospect it wasn't clear at the time. We - for I am British - had lost the war on the continent by 1940, and there was no D-Day possible unless America joined, which it wouldn't have without Pearl Harbour. That Britain felt it was in existential crisis is clear from diaries at the time, and celebrated in many US newspapers at the time; you are overestimating Anglophilia in the US. Isolationism ran deep.
The more likely outcome was surrender or accommodation if Germany had taken the USSR. The colonials wouldn't have mattered then.
Several months before Germany "had to" declare war on the United States, the United States Navy was ordered to engage and destroy German warships on the high seas, wherever they might be encountered.
The US *went* to war well before Pearl Harbor, we just didn't *declare* it. And we knew it was going to take another six months at least to get the Army in shape, so we kept the undeclared war purely naval at the outset.
This wasn’t a declaration of war. It was merely protecting surrounding waters.
It was an undeclared limited war. Shooting another country's warships on sight in international waters is usually considered an act of war, especially if it's accompanied by public statements of "Yes, we meant to shoot at your ships, and we'll do it again!"
There was also Lend-Lease, which started 9 or 10 months before Pearl Harbor. International law at the time allowed for private sale of arms by citizens of neutral nations to countries that were at war, but only if the government policy controlling the arms trade was applied impartially among the belligerents. Allowing private sales to one side but not the other was contrary to provisions of impartiality in Hague Convention (XIII) of 1907, and governments supplying arms and other war materiel "directly or indirectly" to one side or the other was expressly forbidden. The Cash and Carry policy of 1939-40 was crafted to technically comply with the letter of Hague Convention XIII while still favoring Britain and France, since Germany couldn't afford to pay cash on the barrelhead for arms shipments, and even if they could, they'd have a hard time getting payments and deliveries past the British blockade of Germany. Lend-Lease (and the Sept 1940 Destroyers for Bases deal) crossed the line to where the US was no longer behaving as a neutral power under international law, even if we weren't (yet) actually shooting at the German military.
> It was an undeclared limited war. Shooting another country's warships on sight in international waters is usually considered an act of war, especially if it's accompanied by public statements of
It was in response to the Germans attacking American ships and actually sinking one - which on its own didn’t lead to war. It was also purely defensive. Presumably such a policy, announced or unannounced, applies to China now.
Lend-Lease certainly showed a bias towards the Allies, but it's not a declaration of war either.
I know it’s absolutely against the post-war American mindset to believe anything else, because it’s a post-war justification of American intervention elsewhere - the arsenal of democracy against all the new Hitlers. To believe, then, that the US wouldn’t have fought Hitler 1 without the German declaration of war is anathema.
I doubt it, although I would defer to anyone with actual military experience. The main two points that jump out:
#1 The thing the Nazi military was great at was blitzkrieg, like, super-fast armored assaults that defeat the enemy fast and hard.
#2 Nazi Germany's big problem is that they're at war with both the US and the USSR. You don't just have to beat the Soviets, you also have to beat the Americans. And yeah, allying with Japan keeps the Americans focused on the Pacific but what's the plan here, that Nazi Germany holds out against the Soviets until 1946-1947 and then the Americans will make peace with Germany, instead of wrapping up Japan and then pivoting to Europe?
Like, long-term, grand strategically, Germany cannot survive a long-term conflict against both the US and USSR. So either you make a stable peace with the USSR (nope, just nope), knock the USSR out of the war with an attack (plausible but failed), make peace with the US (maaaaybe, especially if you sell out the Japanese but I've never heard of anything like this getting traction), or knock the US out of the war (LOL).
I don't know, I have difficulty imagining anybody winning a long, defensive war against Soviet manpower and American manufacturing but this is very armchair theorizing and not an area I've studied too much.
Hi! This is off topic to the above conversation, but I was recently searching for and revisiting your reviews of cities you lived in.
Two questions:
1) Do you have them aggregated somewhere in one place, and
2) How is Houston two years later?
1) No, no aggregation
2) Houston has soured a bit but it's still a great city. Houston has two big problems which have grown over time. First, I know I said it's hot, but it's hot and it wears on you. Even by California Central Valley standards, it's too fricking hot. Second, more importantly, there's just no nature. No trees, and that's really started to bug me. I mean, there's Sam Houston forest and stuff but... it's not pretty, it's not Cali or Colorado or Oregon or Arkansas, it's just ugly swamp/scrubland. It's not even gorgeous open desert like El Paso or New Mexico. I'm missing nature a lot.
On the other hand, the entertainment and event options are insanely good, to an extent I think I've acclimated to without taking the time to appreciate. Like...I just automatically get season tickets to the Alley Theatre, Dirt Dogs Theatre, and Rec Room Arts and I'm just booked into seeing 14 solid-to-great plays/year with no effort, it's just a thing that comes up on my calendar on a random Tuesday. That feels very natural and normal to me now but outside of...maybe a handful of major cities that's not a thing. And theatre isn't the primary benefit, it's just like a side thing. The whole indoor events thing is great. Comedy clubs, concerts, sports...I got to see Weird Al in concert. That's just a thing that happens. It's not just that there are good options, it's that you can literally overfill your calendar with good options. Want good funky art stuff, like full-size interactive installations? There's the Art Museum, Meow Wolf, and sometimes the MFAH. Even in incredibly niche stuff, you're spoiled for options.
>especially if you sell out the Japanese
Nazi Germany and Imperial Japan were allies in name only. There was practically no cooperation between them at all, so there was nothing to sell there.
On the other hand, had there been anything substantial to sell, since the USA had a "Europe first" strategy, Japan would have been the more likely beneficiary of breaking that alliance.
On the OTHER other hand, the dominant faction of Japanese leadership was convinced of victory, and determined that Japan should die otherwise, so they wouldn't have sold anything even if they could have.
> Nazi Germany's big problem is that they're at war with both the US and the USSR. You don't just have to beat the Soviets, you also have to beat the Americans
They were at war with the USSR first, of course.
?
Yeah, the US wasn't officially at war when Germany invaded the USSR, but it had started Lend-Lease and cut off oil to Japan, not to mention Germany and Italy had signed the...one sec, the Tripartite Pact (with Japan), officially making them the Axis powers. So yeah, the US wasn't "in the war", but it was pretty obvious which way US policy was, and had been, heading.
None of this is a declaration of war and without pearl harbour the US wouldn’t have joined.
By the time Germany was deep into Russia (late 1941) it looked like Germany had secured the continent. If there had been a defense of Europe strategy it would have happened earlier.
The Germans literally sank an American navy vessel - the Reuben James - without response. 100 lives.
"None of this is a declaration of war and without pearl harbour the US wouldn’t have joined. "
That assertion needs a whole lot more support than you're giving it.
I'm pretty sure that without Pearl Harbor the United States would have waged the same sort of undeclared naval war against Germany that we waged against France in 1798, while building up our Army and (Army) Air Force until they were ready for action. Then found or engineered a Lusitania 2.0 level provocation, or hinted to Churchill that MI6 needed to fake up another Zimmermann telegram stat, which FDR would take to Congress and get a proper declaration of war.
And I can back that up by pointing to the naval skirmishes with Germany prior to Pearl Harbor, and to the policy documents committing the US to working with the UK et al to properly de-Nazify Europe by any means necessary. What have you got?
What I have got is that the US didn't declare war on Germany, even after Pearl Harbour, and wouldn't have - as the diaries of the cabinet show - if Germany had not declared war on the US first. I like to deal with facts rather than speculation.
Wikipedia says about the Reuben James that "The destroyer was not flying the ensign of the United States and was in the process of dropping depth charges on another U-boat when she was engaged."
It further says about the Neutrality Patrol, of which the Reuben James was a part:
"Roosevelt's initiation of the Neutrality Patrol, which in fact also escorted British ships, as well as orders to U.S. Navy destroyers first to actively report U-boats, then "shoot on sight", meant American neutrality was honored more in the breach than observance."
American strategy was to enter the war but let the Germans make the declaration thereof:
https://ww2db.com/battle_spec.php?battle_id=336
"The Neutrality Patrols continued through 1941 but were rendered moot by Germany's declaration of war on the United States on 11 Dec 1941. As part of Germany's justification for declaring war, they made specific mention of the Greer, Kearny, and Reuben James incidents, describing them as flagrant violations of any supposed neutrality. "
"The Neutrality Patrols were controversial at the time and remain controversial still. Roosevelt's own Secretary of War, Henry Stimson, believed the patrols were belligerent acts and he advocated Roosevelt to openly say so."
Static defenses are much more than above-ground fortresses. There are plenty of trench networks on both sides in Ukraine, in a very static ground war.
Just another update in "We're Totally Losing the Debate on AI Risk" (and thus perhaps
Scott Alexander,
Kokotajlo,
Yud,
Soares,
Zvi,
or SOMEONE
should start debating face-to-face (or perhaps "just have coffee with") the people who radically disagree with them)
https://www.understandingai.org/p/the-case-for-ai-doom-isnt-very-convincing
Seriously, every-other-day my phone gives me another story about Yud & Soares being wrong.
If those who believe that AI is a great risk are sure that they are right, I think they should stop debating in these little niche spaces and instead throw themselves into changing public opinion. Scott and the rest do not sound to me like practical people, and I think they should hire people who are good at practical things without being unscrupulous. The group can work on scaring people about AI doom risk, but also about AI slop.

I feel confident that there would be impressively awful results from a study like this: compare toddlers who spend several hrs/day watching AI slop (bright, loud, attention-getting stuff with very little order to it, and by order here I mean order that would make sense to a child age 18-36 months: simple stories with toddler-level drama, people acting on motivations toddlers can grasp, stories that have continuity and a recognizable beginning, middle and end) vs. toddlers watching kid vids from an earlier era that have the structural qualities I named above. Pretty sure the slop toddlers would not do as well on cognitive testing. That should get people's attention.
Society is obviously not going to stop AI progress regardless of what they say. The realistic outcome has always been to increase awareness of the topic for now and be ready to advise leaders when society inevitably GOES FULL PANIC MODE if AI finally does something actually dangerous. If it's too late to intervene at that point, well, too bad for humanity.
There have been multiple video podcasts with these various personalities debating. I think—this is an honest take—they need to button up their looks. They kind of look funny, and some of them speak funny. People don’t tend to take arguments from funny looking & sounding people seriously.
> There have been multiple video podcasts with these various personalities debating
Could you point me to a good example?
https://youtu.be/9XuVn6nljCM?si=vPg6bEvZk_HRcPuI
https://www.youtube.com/live/6yQEA18C-XI?si=0ek8WV0Zs5HL1h09
I’m sure there are better ones available. Zvi generally covers them on his site when they come up.
"Moore's Law is a human law. We double the computer power every two years. But once computers do it, they will first do it in two years, then in one, then in 6 months, then in 3 months, - it's singularity"
Jesus wept. These men want us to take their ideas seriously.
Yes, they need a good-looking and charismatic spokesman. Preferably someone that has no connection to AI research or rationalism. They can be fed talking points from people who are more knowledgeable on the subject.
>Seriously, every-other-day my phone gives me another story about Yud & Soares being wrong.
That's probably because they are wrong. You should be careful to avoid trapped priors in your thinking and use this as evidence to update away from AI being an existential risk.
Good luck with that one. I spilled thousands of words on the topic in these here comment boxes, but trying to explain the basic realities of making stuff to people who have never done it and have 0 interest in learning is draining and I am done.
You know the saying: "It's hard for a man to understand something when his salary depends on him not understanding it", right? But it is 100X harder for a man to understand something when his identity depends on not understanding it.
Yud is an intellectual idiot who hasn't done a day of real work and thinks the physical world is just like words on a screen. No, seriously, go read this gem: https://ifanyonebuildsit.com/6/wont-ais-be-limited-by-their-ability-to-design-and-run-experiments , and weep. The man who knows nothing about running experiments confidently explains how they can be "sped up" if only "smart people" think hard about it. Like, a 1000-hr HTOL test will somehow run in less than 1000 hours?
It's pathetic.
> But it is 100X harder for a man to understand something when his identity depends on not understanding it.
Yudkowsky's identity depended on the exact opposite of what you seem to think. He founded the Singularity Institute, which wanted superintelligence as fast as possible, and did a massive about-face in response to realizing the risks. In any case, he's mostly irrelevant (other than as a decent writer) given that his underlying position is shared by the top experts in the field (excluding those on Big Tech's payroll).
Name three of these top experts who are not computer scientists or "AI researchers", but have a background in manufacturing or similar physical-reality-based fields.
Look, I'm not saying computer scientists are "stupid" or anything of the sort. The point is, they deal with 1s and 0s, the bit-flipping field where progress has been relentless and accelerating at an exponential pace, and it's hard to see why it'd slow down. But they tend to handwave away the fundamental, irreducibly complex reality of the physical world - because they are typically not experts in it, and don't have the experience of making things that move, cut metal, grow cultures, etc. Every time I read Yud or AI 2027, whatever links folks throw at me, it's always the same - they gloss over the physical world at best, and are utterly clueless about it at worst (that would be the "Corolla made of atoms" Yudkowsky; I can't emphasize enough how ignorant the guy is about many of the subjects he bloviates about with astonishing confidence).
So when they say, "AI will keep getting better", I don't have a strong argument against that (but no AGI in 2027, come fucking on, those chips are in the fab today, what do you expect to happen between now and then?). It's when they say, "and it will kill everyone" that I ask a simple question: "how"? How will it kill everyone? Can you at least try to model this? We know what it takes to kill a human, come on, get some basic modeling done, anything beyond Disney fairytales (literally!), something that an engineer can work with.
Crickets.
Manufacturers have enormous incentive to want to push technological development wherever and whenever they can. I just don't understand why you would think these people would be so impartial. Why should we put these people on a pedestal above theorists? Without theorists, manufacturers would not have new products to build.
I don't know what "put on a pedestal" means in this context.
I don't know who these "theorists" are and what their theories of manufacturing are.
I was specifically addressing the fact that computer scientists are not manufacturing specialists.
I was bringing up experts as a counterargument to things in the categories of "alignment will be easy", "AI will necessarily stop beneath human ability", and "it will be nice by default" arguments. Most people, being members of a species that absolutely dominates the entire planet by merit of intelligence alone, don't have an issue with the idea that intelligence -> victory, and don't need examples of "how could something smarter than me possibly outsmart me" to get it. (This isn't a counterargument to your point, just explaining why I commented like that.)
For "how-skepticism", I'd point you to Scott's https://slatestarcodex.com/2015/04/07/no-physical-substrate-no-problem/ or Yudkowsky's https://www.youtube.com/watch?v=q9Figerh89g .
Oh, I see. I certainly don't believe "alignment will be easy" or anything like this. FWIW I think "alignment" as a separate thing from "development" is impossible - we will learn how to deal with the system as we are developing it, not by somehow being able to "jump ahead". I'm agnostic as to whether ASI is possible, but I'm pretty confident it won't be here any time soon.
Scott's piece is interesting but honestly adds nothing to the discussion. "AI will bribe people" is entirely possible, but we already have people who want to kill and wreck. I chuckled at the "plan worthy of Napoleon", as if his Russian campaign were something to emulate. Another example: the "its advice is always excellent – its political strategems always work out, its military planning is impeccable, and its product ideas turn North Korea into an unexpected economic powerhouse" bit is so naive as to make me question Scott's understanding of the world in general. I could go on.
The video is... I'll shut up now.
Napoleon was a devastating battlefield commander who crushed army after army for 14+ years! His enemies were in AWE of his generalship, and he had a fairly good rationale for invading Russia. Hate to see this dismissive attitude of a man who helped build the foundation of liberty in Continental Europe.
This guy, Timothy Lee, agrees with you: https://www.understandingai.org/p/the-case-for-ai-doom-isnt-very-convincing
He doesn't directly contradict any of Yudkowsky's points, but points to the complexities of the real world, which, as you point out, Yudkowsky sounds kinda ignorant of. Here's an abridged version of Lee's main point:
Yudkowsky and Soares believe that some systems are too complex for humans to fully understand or control, but superhuman AI won’t have the same limitations. They believe that AI systems will become so smart that they’ll be able to create and modify living organisms as easily as children rearrange Lego blocks. Once an AI system has this kind of predictive power, it could become trivial for it to defeat humanity in a conflict.
But I think the difference between grown and crafted systems is more fundamental. Some of the most important systems—including living organisms—are so complex that no one will ever be able to fully understand or control them. And this means that raw intelligence only gets you so far. At some point you need to perform real-world experiments to see if your predictions hold up. And that is a slow and error-prone process.
And not just in the domain of biology. Military conflicts, democratic elections, and cultural evolution are other domains that are beyond the predictive power—and hence the control—of even the smartest humans.
**********
When it comes to the points you made quite a while ago about the absurdity of the prediction we'd have functioning smart robots in 2027, you and practical knowledge absolutely win. You were right, and the writers of that AI 2027 thing look silly. If they didn't know much about factories and robotics they should have consulted someone who did. Lee, above, is making the same kind of point as you about the real world: it has characteristics that will slow AI down enormously and keep it from being an unstoppable force. But you, in your comments about robots in factories, had an actual factoid to rebut the claim with: if robots were going to be functioning in many places in 2027, there would be preliminary versions now in the places that were going to produce them, and the prelims aren't there. But you and Lee don't have an equivalent factoid to point to in the general case. Yeah, I get that living organisms and military conflicts and political changes are -- what are they called, chaotic systems? -- and so inherently terribly difficult to predict. But I don't see how anyone can say an extremely intelligent future AI could not predict them. Where's the proof? So it's hard to know whether the point Lee is making is a great common-sense insight, or just comes down to a statement that in the world as he knows it certain things cannot be predicted or controlled, and he cannot imagine that changing. So maybe his arguments merely demonstrate his failure of imagination.
So, Fibonacci, I'm not trying to be a clever debater here. This is what I really think. Wut u think of it?
I swear I didn't read this: https://techblog.comsoc.org/2024/11/25/superclusters-of-nvidia-gpu-ai-chips-combined-with-end-to-end-network-platforms-to-create-next-generation-data-centers/
before picking the HTOL test as an example!
"a cluster of more than 16,000 of Nvidia’s GPUs suffered from unexpected failures of chips and other components routinely"
OK, Fibonacci, I get it. Your points are valid. But I am still worried about AI doing us in. So first I’m gonna make sure you see that I get it, then tell you why I’m still worried.
*I get it.*
There are some things about how the universe works, such as the “butterfly principle,” that limit what an AI, no matter how smart, could do. Additionally, there are lots of things about how the practical world works that provide a lot of stability to the world as it is - or you could think of them as a sort of inertia that would interfere with a supersmart AI swooping in and shoving things in a bad direction of its choosing. You’ve pointed to various examples of this, and I ran across one of them myself recently: AI diagnosis of, say, pneumonia via reading of lung images is more accurate than radiologists’ in tests. But in real life AI diagnoses from images are 20% or so worse than radiologists’, because of various real-world lumps and bumps. The AI was trained and tested on high-quality images from one hospital, on patients who had no complicating conditions that might affect the X-rays. In real life, images vary across different hospitals, some are of mediocre quality, some are of patients with complicating conditions, etc. Also, you need a different AI for each of many body areas, and radiologists would have to pay for and have available dozens of different AIs to get through a day's work. AI doesn’t work on ultrasounds. And then there are complications having to do with whether insurance will pay for an AI-read image.
*Why I’m still worried*
1) I think Yudkowsky irritates you so much that you tune him out. I agree that he is dumb about the real world, but I don’t feel the personal irritation you do at him. I am used to weird smart people who are dumb about the practical world. And what I see is that lots of them are extremely smart about patterns, including patterns of ideas, patterns of observed regularities that suggest odd possibilities. It would not surprise me at all to learn that Einstein was as naive about the real, practical world as Yudkowsky. But he saw new patterns in physics behind the familiar ones. Some autistic people can recognize a 6-digit prime on sight. It’s a different kind of smarts from yours.
2) I know some of the examples in Yudkowsky’s book, such as the stuff about AI being able to just take over production of various things, can be refuted by pointing out the kinds of “inertia” real-world things like factories and weather have. But those ideas are not Yudkowsky’s only reasons for believing AI is a grave danger. And I don’t think superintelligent AI would make the mistakes Yudkowsky did. If ASI were planning some kind of takeover that gives it more power than mankind, I think it would recognize the limits of its control of chaotic systems, and the ways practical constraints would slow down various moves. Don’t you? I mean, you and I easily recognize those things. In fact I bet that if I asked GPT-5 right now what would limit AI’s control of the weather, or AI’s taking over manufacturing, it would name a lot of the same things you did. I don’t know what an ASI would do, if it somehow had the goal of ruling the world via controlling resources and having unbreachable defenses against all forms of human attempts to override it. I’m just smart, not superintelligent. But it does seem possible it would think of things you, I and Yudkowsky have not thought of, things that would work in the world as it is.
3) Scott and Zvi take Yudkowsky seriously. I don’t know whether either of them knows how to change the oil in their car, but both have succeeded at a number of real-world things: training, employment, marriage, kids. You don’t succeed at that stuff unless you can take into account the real world, not some inner version of the world you’ve dreamed up and are impressed to death by.
4) All my intuitions and common sense tell me AI would be dangerous as hell if it had will, goals, etc. Right now, AI is a machine that just sits there inert until we ask it to do something. It can’t learn from experience, can’t learn from direct instruction, can’t “think over” what it knows and find isomorphisms, reconfigure things, have ideas. Seems to me nobody has any good ideas about how to make it able to. Personally, I think further development of AI is going to involve some sort of AI/human hybrid — either AI somehow trained on a developing human being, or AI that uses the brains of human beings bred and used only for this purpose, or human beings that are able to lead human lives, but somehow have continued and instant access to a very smart AI. And I *know* there is no way to guarantee a human being is aligned with the rest of humanity.
5) “The universe is not only stranger than we know, but stranger than we can know.“ What if ASI understands some strange things, things we cannot ever know? With this in the back of my mind, I ask myself about ASI and the limits on predicting chaotic systems: sensitive dependence on initial conditions, + if you measure them extensively you change them. Could an ASI that understands strange truths predict and control a chaotic system? And then I think “yes, by becoming the system.” I know that sentence doesn’t exactly mean anything. It’s not the moon, it’s my finger pointing at the moon. My way of contemplating the idea that there are degrees of intelligence that might permit insights that make impossible-seeming things possible.
Ere, thank you for a thoughtful response. Let's see if I can address your points within a reasonable word count.
1) I don't tune Y. out. I listen to him, and every time he opens his mouth/types words he proves to be an utterly ignorant buffoon. Look, when I was 20, I was at least as "smart" as I am now, but I started working as an entry-level engineer. Why? Because I knew nothing about engineering even though I was very smart. Well - Y. never EVER even started as an entry-level anything, he never learned anything about the subjects he bloviates about. Let me give you a perfect example, and if that doesn't illustrate how stupid the man is, I don't know what else to do:
He talks about ASI "taking our atoms" because it needs it (see https://www.youtube.com/live/6yQEA18C-XI ). First of all, atoms? Seriously? Atoms?!! Does he think that a mix of H2 and O2 is the same thing as H2O? That the ASI is so fucking dumb as to take everything in sight, without regard to what it is, and somehow take it apart down to the atomic level? Do you not see how insanely stupid this is? But let's keep going; the man is just getting started showing his ignorance. When asked a perfectly good question - "why our atoms? we are not made of anything special, these atoms are abundant all around us" - he ends up giving an example: the ASI can burn you to release the energy.
Burn! Ere, a human is 70% water; we don't burn. It's impossible to burn a human without massive amounts of extra fuel; go ahead, buy a pork shank and try igniting it. How.... dumb does a man have to be to use this as an example of how an ASI(!) will use humans.
But forget that: he thinks that an ASI won't be able to recognize the value of a complex low-entropy entity that a biological being is, and will just wreck it, wasting incredible amounts of energy (how do you think you get H and O out of H2O?) to destroy a self-replicating supercomputer and a super-robot in one package? This incredibly smart thing won't understand how valuable humans are to it? It won't be able to use them to achieve its goals? And won't instantly understand (remember, it's an ASI) how best to use humans - that they fulfill their potential best when they are happy?
The man is a truly special case of knowing much and understanding nothing.
2) let me just say first that ASI doesn't exist, and it's not at all clear such a thing is even possible (we don't even know how to define it), and it basically is another way of saying "a fantastic being that can do anything", i.e., God. Or Satan, we don't want to discriminate. But in any case, no, it can't control chaotic systems no matter how smart it is, and it can't shortcut computational irreducibility, no more than it can break the second law of thermodynamics (which is actually an expression of computational irreducibility according to Wolfram).
3) Scott and Zvi are very smart, but, as we established in 1), that doesn't make them experts in anything. Scott, for one, has holes in his epistemology that you could drive a Mack truck through, and yet he refuses to recognize them. See, for example, his debate with Skolnick about schizophrenia genetics, which was so bloody disappointing it forever reduced my opinion of him. But then he's a trained psychiatrist; why would I overweight his opinion on tech vs., for example, the guy who started three robotics companies: https://crazystupidtech.com/2025/09/29/irobot-founder-dont-believe-the-ai-robotics-hype/
4) actually I don't disagree with this one. AI is dangerous because it's a powerful tool, and like any powerful tool it can be used for good and evil. My objection is to the specific fantasy of AI getting up from its bed and murdering everyone.
5) No amount of "understanding" can predict the behavior of the real world. This is again the computational irreducibility of the universe: if it's impossible to predict what the Nth line of Rule 30 is going to look like without going through the full computation up to the Nth line, what hope is there of predicting how, for example, a viscous fluid will behave? By the way, the simple-looking Navier-Stokes equations that describe it are generally unsolvable, ASI or not.
And again, "ASI becoming the system" is... look, please don't take it as an insult, it is a "profundity", a deep-sounding meaningless statement that reflect a religion-like view of ASI, basically equating it with God. Worrying about ASI destroying our world is at this point pretty much the same as worrying about God destroying our world, and should have about the same level of actionable response, i.e., do nothing different.
I’m not trying to have the last word, really just signing off with a few last comments — though feel free to come back at me if you want.
About various idiot things Yudkowsky has said, eg AI will disassemble us for our atoms, or use us as torches. I agree they are dumb. I could have seen what was wrong with those statements when I was 16, based on high school science. That does not disqualify him in my mind as a judge of what to expect to happen as AI development proceeds, because it seems to me the situation is so profoundly novel that scientific and practical knowledge may not be that helpful. That kind of knowledge certainly is helpful in thinking about how AI of the present and near-future era will affect manufacturing, health, the economy, etc. I’m talking about the bigger and more general question, which really has 2 parts: What will AI 30 years hence be capable of? And how likely is it that it would do our species great harm, either intentionally or unintentionally?
What AI 30 years hence will be like seems like a very difficult and challenging question, one that calls for deep and original thought about how the human mind works, how computers and machines work, what kind of modifications are possible to the type of AI we now have, etc. It might call for a new paradigm. I’m thinking here about the 2 smartest things I’ve ever understood — Wittgenstein on language and mind, and relativity. I doubt that either Einstein or Wittgenstein was a bit practical, or gave a shit about practical knowledge. They had minds that called up huge and novel patterns and played around with them. Some of what they would have had to say during that process would have sounded naive and dumb to most people: “What if space and time are sort of like the warp and woof of one piece of fabric?” So when I worry that Yudkowsky is right, I am not thinking at all that he is better than you at smart practical science-based thinking — I’m wondering whether he is doing genius pattern matching. He is obviously autistic. Maybe he’s doing the equivalent of recognizing 6-digit primes.
I agree that “super-intelligent AI” is used as though it means God, and actually it doesn’t mean anything. But I think the idea that AI 30 years from now could do astounding things, things that depend on paradigm changes, is not absurd (though also not guaranteed). That’s why I refer to it above as future AI, not ASI.
And there was one time when you skated right over something I’d said:
I said: “I ask myself about ASI and the limits on predicting chaotic systems: sensitive dependence on initial conditions, + if you measure them extensively you change them. Could an ASI that understands strange truths predict and control a chaotic system. And then I think “yes, by becoming the system.” I know that sentence doesn’t exactly mean anything. It’s not the moon, it’s my finger pointing at the moon. My way of contemplating the idea that there are degrees of intelligence that might permit insights that make impossible-seeming things possible.”
Your response: ”And again, "ASI becoming the system" is... look, please don't take it as an insult, it is a "profundity", a deep-sounding meaningless statement that reflects a religion-like view of ASI, basically equating it with God.”
I *know* that. I *said* that, I literally said “I know this sentence doesn’t exactly mean anything.” What I was trying to point at was the idea of paradigm shifts in science and physics, and the idea that future AI might be capable of some shifts that made possible things that now appear utterly impossible. If I were to take it a little further, I would say that maybe chaotic systems are like space or time, and if you change the paradigm so that space and time are warp and woof of a single fabric, new possibilities open up. NO, of course I am not sure of that. NO, I do not have a new paradigm in mind. NO, I am not convinced that future AI would be capable of coming up with a profound paradigm shift. I’m just reminding you that such things happen, and that it makes sense to consider that a future artificial intelligence would be capable of such a thing.
PS I appreciated your nail gun comment to that Wormwood creep.
> He talks about ASI "taking our atoms" because it needs it (see https://www.youtube.com/live/6yQEA18C-XI ).
That video is full of pure gems.
George: I have AI, I'll gang up with other humans who have AI.
Eliezer: Woah woah woah, you have AI? Or does the AI have you?
That's some deep shit, man…
Two points:
1. Lee still(!) understates the unpredictability of the world. He talks about complex systems, but the thing is, even simple systems (example: https://en.wikipedia.org/wiki/Rule_30) can display unpredictable behavior, meaning that the only way to know what the system's state will be in the future is to step through its operation; there are no shortcuts. This is what Wolfram calls "computational irreducibility", the impossibility of "jumping ahead". (There's a toy Rule 30 sketch below, after point 2.) For example, think about why a 3 nm process node was not available 20 years ago. Why did we go through 350, 180, 120, ... 22, 18, 11 nm first? We knew all the advantages of going to 3 nm, but we couldn't go there without going through the motions of shrinking the transistor one step at a time. Leaping ahead was impossible even though we knew exactly what would be needed to do this.
2. Specifically for "chaotic" systems such as weather: the reason no amount of "intelligence" or "compute" can make predictions beyond a certain point is that the state of the system at a far-away point in time depends on the initial conditions to such a degree that even the tiniest fluctuation in them results in a massive change at that point in time (the "butterfly effect"). This means it doesn't matter how advanced the algorithms are or how smart the AI is: its predictive abilities are gated not by intelligence but by the accuracy of the starting data. (See the second sketch below.)
And those data come from real-world sensors, which are expensive and slow to make and need to be installed. And then things get worse once active agents impact the systems; all predictability goes out the window.
These fundamental limitations appear to be utterly ignored by Yud et al., as if more lines of better code could spin weather sensors out of the ether and place them at every square foot of the planet.
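Since I keep leaning on Rule 30, here is how little machinery it takes - a toy Python sketch (with a wrapped-around row instead of Wolfram's infinite tape):

def rule30_step(row):
    # New cell = left XOR (center OR right). That is the entire rule.
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

row = [0] * 101
row[50] = 1  # start from a single black cell in the middle
for _ in range(1000):
    row = rule30_step(row)
# To learn what row 1000 looks like, we had to grind through rows 1-999 first.
# Nobody knows a formula that jumps straight to row N; that is the whole point.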
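And for the butterfly effect you don't even need a weather model; the textbook logistic map (one line of arithmetic, fully chaotic at r = 4) already shows it:

def step(x):
    return 4.0 * x * (1.0 - x)  # logistic map in its chaotic regime

a, b = 0.4, 0.4 + 1e-9  # two "measurements" differing by one part in a billion
for _ in range(60):
    a, b = step(a), step(b)
print(abs(a - b))  # the two trajectories are now completely decorrelated

The gap roughly doubles every iteration, so a billionth of measurement noise becomes order-one after about 30 steps. Better algorithms don't help; only better starting data does, and sensors are physical objects somebody has to build, ship, and install.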
> when his identity depends on not understanding it
"AI Doomer" is not Yud's identity mor any of these people's identities. They have careers and lives beyond this. And many of them would happily just go back to "boring old AI development" if given the chance.
> Yud is an intellectual idiot who hasn't done a day of real work
He worked at Bell Labs. Or do you mean Real Work as in "fixing a toilet"?
> The man who knows nothing about running experiments confidently explains how they can be "sped up" if only "smart people" think hard about it.
These types of arguments remind me of the "God of the Gaps" arguments in the debates between Christians and Atheists. "Science can't explain this, Science can't explain that." But then it does. So the Defenders Of The Faith retreat to another unexplained "gap" in our understanding of the cosmos.
Well, replace "explain this" with "speed this up" and we basically have the same argument. "A, B, C, D, X, Y, Z can never be sped up!" (until they are)
Protein folding was an impossibly complex problem, until AI researchers solved it, and then found more protein shapes in a few years than in the past half century. By a factor of 1000.
I really wish people like you would come over to our side of the fence and notice the Gigantic Grizzly Bear rapidly bearing down on us.
"He worked at Bell Labs."
No he didn't, lol lol lol. His FATHER worked at Bell Labs.
Yudkowsky hasn't worked a day in his life. Never mind that, he hasn't even formally studied anything, you know, in a setting where one has deadlines and standards and a professor can tell you that your work sucked and you have 48 hrs. to resubmit.
That guy. That guy wants to bomb datacenters that he wouldn't know how to turn on if his life depended on it.
>Yudkowsky hasn't worked a day in his life
the man is a published author. Writing is work, and judging by the audience he's amassed (and my own opinion, although you may disagree - still, no author is everyone's favorite) it's work he's quite good at.
Yes, he has a gift of gab. He’s good with words. But that doesn’t make him an expert in anything else. For example, “Corolla made of atoms” is a grammatically correct snippet that is also meaningless drivel as far as modeling reality is concerned.
He’s a Greta Thunberg of AI, understanding little and pontificating much, turning the field into a clown show. She had her “how dare you”, he has his “everyone will die”, with about the same convincing power, i.e., asymptotically zero.
"He’s a Greta Thunberg of AI"
That is *savage* 😁
> No he didn't, lol lol lol. His FATHER worked at Bell Labs.
fair
Notice how all these impossibly complex problems are related to "knowledge": we didn't know how to solve protein folding, now we do. We didn't know the structure of human DNA, now we do. The LLM didn't know how to write MATLAB code, now it does.
Now what?
What are we doing with this knowledge? Where are the amazing genome-matched therapies we were all promised?
Well, they are probably still coming, but using the knowledge to create real things takes time and effort that cannot be compressed beyond very strong limits no matter how smart the people/machines become. Some experiments can be parallelized - at massive cost, mind you - but they cannot be shortened; see my HTOL 1000-hr test as an example. And the whole point is that the outcomes of the experiments cannot be predicted, which is why we run them.
FFS, we can't even predict the performance of a basic IC without models derived from experimental data!
We advance real world progress one experiment at a time. The physical reality is fundamentally un-modelable from any sort of "first principles", it's irreducibly complex and we can only create occasional short bursts of computational shortcuts, not infinitely self-improving loops.
human genome project was a great example
A great example of what?
Your comment implied that there were no good examples in the linked document. The poster is making the bare assertion that no, the human genome project is a good example.
Example of what?
Reading your post, you say "knows nothing about running experiments", so I presume that's an example of a well-run experiment that was under budget and data-efficient.
In other words, we're not doomed any time soon? I don't follow this whole thing very closely and only vaguely recognize most of those names.
No, lol. At least not from "AI" killing everybody. This bit of bad sci-fi can be safely stuffed into a trash bin.
Oh okay, I didn't realize that's an idea that AI-educated people took seriously. I assumed that what people are worried about is AI+the internet confusing reality for most people to the point where it's a huge problem, or something along those lines. I feel like that idea is pretty realistic, but then again, that's just a feeling based on my minimal use and knowledge of AI.
"AI+the internet confusing reality for most people"
Apparently, if I believe media articles, it's happening already. People are using chatbots as their romantic partners, therapists, and marriage guidance, and since AI is that oleaginous agreeable reinforcement dialogue partner (you are so smart! you are right! anyone who disagrees is in the wrong!) this is having devastating effects.
Granted, if you're asking AI for advice, your relationship is already rocky, but the AI is being used as a bludgeon with "see, I'm right, the AI agrees with me and we all know AI is always right".
https://futurism.com/chatgpt-marriages-divorces
"Even Geoffrey Hinton, a Nobel Prize-winning computer scientist known as a “Godfather of AI” — a technology that likely wouldn’t exist in its current form without his contributions — recently conceded that his girlfriend had broken up with him using ChatGPT.
“She got ChatGPT to tell me what a rat I was… she got the chatbot to explain how awful my behavior was and gave it to me,” Hinton told The Financial Times. “I didn’t think I had been a rat, so it didn’t make me feel too bad.”
This is just reinforcing my pessimism about how AI will destroy us - not because it becomes super-intelligent and self-aware, but because we humans are stupid and we will happily hand over decision-making ability to the machine because it relieves us of the work of having to think for ourselves.
I'll be somewhat more pedantic than usual and say your pessimism here might be misplaced. For one thing, Hinton apparently never believed what ChatGPT said. For another, the narrative in progress here is that Hinton's girlfriend made a mistake in trusting ChatGPT, and that mistake was so big that it might justify them breaking up anyway.
If that's the case, then AI was a benefit here. Who knows how many terrible-but-hidden matchups it could spot and prevent? Imagine all the broken homes and traumatized children that never come to be!
Thirdly, Hinton is 77; assuming his ex-GF was about as old, then frankly, I'm not sure how much was going to result from that relationship either way. Admittedly, that takes some wind out of the sails of my second point, but if you're concerned about younger people making the same mistake, then the wind goes right back in.
Just yikes
Tons of experts have voiced concerns about AI killing everybody. The usual names are Geoffrey Hinton, Yoshua Bengio, Demis Hassabis.
If you must decide using the advocates' prestige instead of evaluating the arguments yourself, I'd still go with them instead of an anonymous Substack commenter who "knows the realities of making things".
G. H. - computer scientist
Y. B. - computer scientist
D. H. - AI researcher
Wake me up when, for example, a semiconductor reliability engineer publishes a doom scenario showing how AI can run a fully automated fab recursively improving yields and throughput.
"Prestige" is not working for us though, because it's dwarfed by the Institutional Prestige of magazines and media outlets that are against us (even if the particular articles are written by Joe from Sales).
The "Prestigious Names", imo, need to Get Out There And Verbally Debate.
This war of strongly-worded blogposts is unwinnable.
Can anyone recommend a good news site that outlines which news sources are reporting an event, separated by a "right - center - left" categorization? I already know about Ground news, any others?
I only know about Ground News, and only from their marketing. Have you tried it and were disappointed? If so, why? I am conceptually interested in the approach but have never tried any such site myself.
also try allsides.com.
Seconded; this is part of my daily morning feed.
I have tried it, and while I'm not disappointed per se (it does what they claim it will do), it doesn't do everything I might want. Ground News seems configured to present its users with a specific event or news story, and then break down who is reporting it. It doesn't really have a time-efficient way of laying out what the differences are between right, center, and left reporting on these stories, so I am interested in what other sites might be offering, and whether there is anything even better.
readtangle.com covers one topic a day with right/left/site's take.
Thanks!
Going to try to quit Substack to finally read that book of feminist theory. When I'm back I'll be trans, a redpill misogynist, or dead by my own hand. Wish me luck!
You know, you just might learn something about yourself.
Godspeed.
You'll be missed, AD. Just remember, katabasis is always part of the hero's journey, and ideally makes you come back stronger and wiser!
From experience across several decades of tech jobs, and if you are anything like me, at the end of the first day (or first few days), you will absolutely hate the place -- the work is incomprehensible and there's no useful guidance anywhere, the people are standoffish at best, there has been *no* consideration given to supporting you in your work (like actually arranging for you to have a computer, chair, desk, phone, or whatever -- let alone training or anything else "touchy-feely" like that).
That's slightly exaggerated, but only slightly -- each of those things (individually) has actually happened to me personally. (Yep, even starting a programming job: me: "what computer can I use?", them: "er...")
Thing is: many of those jobs I liked or loved after a while, while a few turned out to be bad mistakes for me.
What I'm trying to say is that feeling early on that you should never have taken the job, or that you'll never be suited to it, is (at least in my experience) normal, and to be expected. It provides no information (doesn't shift the Bayesian credences non-trivially) about your future in the job. Above all, *GO BACK THE NEXT DAY*. The marginal cost of sticking it out for a few more hours/days is very low, the expected payoff is very high, but (if you're like me) it won't feel like that.
(Of course, none of that applies in cases of *actual abuse* -- but that's never happened to, or near, me so I can't comment further.)
For some reason the above has turned up as a top-level post despite me entering it in the "reply" box to "Ebrima Lelisa". Sorry about that. I shall not attempt to use Substack again...
This is a common bug that happens when commenting using the app, it's probably not your fault. Commenting on an actual laptop almost never messes up in this way.
Thanks for taking the time.
FW[little]IW, I was using the web interface via Firefox (fully up-to-date - 143.0.1) on Linux.
Happened to me in exactly this way this week.
1) Is it conceivable that consciousness is an evolutionarily contingent outcome?
That is, given a sufficiently complex system, organic or inorganic, capable of displaying intelligent behavior, learning, playing chess, composing poems, the whole hog, but still entirely unconscious: is this in fact the norm and what might be expected, and is the fact that we are conscious actually an evolutionary accident, unlikely to ever repeat?
Particularly, if consciousness itself is a language construct, as Jaynes suggests.
2) Is it possible that simulation can not capture everything that is in the thing or system we are trying to simulate?
For physics only captures the metrical properties of things, and this leaves open the possibility of non-metrical aspects of things, the first of which is the existence of the thing itself.
Now, simulation, even in principle, can only simulate what has been put into numbers. That is, it only covers metrical aspects, leaving non-metrical aspects outside the realm of simulation.
Now, consciousness is possibly related to non-metrical aspects of conscious matter. If so, simulating matter, howsoever faithfully, even atom by atom, will not capture consciousness.
For consciousness is inherent in the conscious matter such that no simulation can yield consciousness.
Some behaviours just don't make sense without consciousness, so there is absolutely no chance humans are the only conscious beings.
However, consciousness might still be an evolutionary accident. Some people hypothesise that it's almost like a parasite that is actually worse for survival long-term, because the environment cannot properly adapt to conscious decisions.
1) Conceivable, but extremely unlikely, given what we know about how the brain works. Conscious deliberation appears to serve a critical role in learning from phenomenal experience in the moment. It's the function in the mind that assigns an emotional tag to physiological reactions, and that in turn organizes the long term memory. It is the gatekeeper to the self-concept, also stored in memory, which guides future behavior. Seemingly, a non-conscious entity would not be able to carry out those functions.
2) If the simulation is good enough, it should simulate everything, given that everything in the universe is governed by material causes. I suppose you could argue that a simulation that produces the actual thing isn't a simulation anymore, rather the thing itself, but that seems like semantics.
But the point is that cognitive processes seem not to lie in the physical structure of the neurons themselves, but in the patterns of information exchange between them. Given that, anything which reproduces those patterns should reproduce their outcomes.
Even if we know how the brain works, that is a far cry from knowing the how and why of consciousness - the hard problem, as you must know it is called.
Your point about simulation is what I am challenging. Please note how one models: we leave out something, and even the most exact model necessarily leaves out something. A simulated hurricane is not a hurricane. A simulation of neurons is not, and can not be, the neurons themselves. It must, by the very definition of a model, leave out something.
All physics models do this. Maxwell's equations leave out the actual experience of electricity and the actual physical objects, which are replaced by numbers.
You wrote: "That is, given a sufficiently complex system, organic or inorganic, capable of displaying intelligent behavior, learning, playing chess, composing poems, the whole hog, but still entirely unconscious: is this in fact the norm and what might be expected, and is the fact that we are conscious actually an evolutionary accident, unlikely to ever repeat?"
I don't have to solve the hard problem of consciousness (that is, I do not have to demonstrate how it develops) in order to outline what functions it serves. Since it appears to serve critical functions within the mind, I propose that no organic system can evolve human-like intelligence without an actual consciousness. AI does not disprove this assertion, because a human-like intelligence designed it. I'm making a point about evolution in nature. For that to happen, those functions would have to be served by some other cognitive process. Which one?
As for simulation, you were asking if it might be impossible to simulate a system under study. For practical reasons, yes, it may be impossible to reproduce a hurricane perfectly on contemporary computers. But I thought you were asking conceptually: is it in principle impossible to accurately simulate a system? In theory, no, it shouldn't be. Provided you have precise data regarding the relationships between every element in the system, and you have enough computational power to run those relationships over time, there is no theoretical barrier to reproducing that system perfectly. Whether we could ever, in real life, do that for something as complex as the human mind is a different issue.
What does simulation of something mean? For a hurricane, it is reproducing the flow as a field. But it is all numbers. There is no hurricane.
Now we are getting into semantics, as I warned. When scientists create a model of a real-world phenomenon, like a hurricane, what they are attempting to do is test whether or not the real-world phenomenon follows a mathematical model, and how closely. These models are based on various theories about how the real-world phenomenon works - in the case of a hurricane, how air molecules exchange energy over time in a space. If the model is based on these theoretical ideas, and the model behaves in a very similar fashion to the real-world phenomenon, then this is taken as evidence that the theories are close to being an accurate description of what is happening in the phenomenon (the hurricane). Models like this are never 100% predictive, because we have only incomplete data regarding the actual behavior of air molecules in hurricanes. But what if we had 100% precise data on every quantum of energy exchanged by every molecule? Then, in theory, the mathematical model and the hurricane would behave precisely the same. It is probable that we can never do this, because we can never get that precise with our data, and our computers do not possess the computational capacity to run the model if we did, but there is no a priori reason why a simulation cannot come arbitrarily close to the real thing.
This is why artificial general intelligence is a concern. In theory, there is no reason why a more precise model of the human mind than we currently have could not reproduce various cognitive processes going on inside it, including conscious self-awareness. I do not believe that we are anywhere near producing that outcome, but I can't think of any reason why it would be impossible.
Even a 100% precise model is a model and not the actual thing.
The model is just numbers while hurricane has atoms and molecules.
1) It is conceivable, but how would you falsify this?
2) It is true that science by definition can only capture what we can measure (metrical), and by extension, physics as a branch of science can only capture what we can measure.
I'm not sure the same is true for simulation - e.g. simulations could capture non-metrical properties by accident.
I think you are confusing the map with the territory. A simulation could, in principle, be conscious. If you are correct that consciousness is not measurable, then it follows that we have no way of knowing that our simulation is conscious (at least by science). It does not follow that the simulation does not cause consciousness by some unknown mechanism.
Well, one can not rule out that a simulation may capture non-metrical properties by accident. So, it can not be ruled out that a simulated human could be conscious, but nothing guarantees it.
That's all I am saying. You can have a perfectly simulated human in silicon, behaving pretty much like an ordinary human, at least for short duration, and who would be perfectly non-conscious.
This is what I would expect.
>Well, one can not rule out that a simulation may capture non-metrical properties by accident. So, it can not be ruled out that a simulated human could be conscious, but nothing guarantees it.
Agreed, but who is arguing that it is guaranteed? I think we can't know one way or the other - though if you have a simulation that gives output exactly like a human, it seems conceivable that it might be conscious. After all, the only evidence I have that other human beings are conscious is that they are similar to me. You could well all be p-zombies!
Also, if you believe in a mechanical world governed by some underlying rules, there must be some replication of a human brain that would produce consciousness.
>This is what I would expect.
Why would you expect that? I think we have no idea how consciousness is produced, or really even a good grasp of what consciousness is!
It's also possible that gravity is the suction of the eternally hungry earth's core, craving to pull us into its mouth. But lots of other things about how the universe works strongly support a different model of whattup with gravity.
Let me add that conscious human is not behaviorally identical to a non-conscious or pre-conscious one.
A conscious human can plot deceit for 50 years without acting. A non-conscious perhaps not even 50 minutes.
You should know this is an idiosyncratic definition of "consciousness."
I am working with Jaynes' definition.
If so, you don't seem to be doing so consistently.
Are you talking about vitalism?
https://en.wikipedia.org/wiki/Vitalism
Well, it is similar, only the cleavage is between conscious and non-conscious; not between living and non-living.
Generally I'm very suspicious of non-physical explanations for physical phenomena. If something non-physical caused consciousness, how would that thing "interface" with the physical world of brains and neurons? Maybe we simply don't know enough yet, or aren't smart enough, to have a useful model of all brain functions that is amenable to simulation. That doesn't mean we will never have one, and it certainly doesn't mean that we *can* never have one.
There is nothing non-physical but the idea is there are aspects to physical objects that can not be captured by physics.
Related:
https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)
https://en.wikipedia.org/wiki/Echopraxia_(novel)
Interesting that
"Watts himself dismisses the idea that humans have free will as a "farce" unworthy of serious debate. "I don't have much to say about it because the arguments seem so clear-cut as to be almost uninteresting. Neurons do not fire themselves. [...] The switch cannot flip itself. QED." "
He can not be sympathetic to the view sketched here.
If by free will you mean that your brain takes input, processes it, and then makes decisions that have implications in the real world, then I think humans have free will.
I have come to think there is no real paradox between this and a mechanical world view, it's just at a different level of abstraction. The neurons may fire in an entirely mechanical way, completely determined by physics, and the effect of this is that thoughts arrive in the brain, and decisions are made, i.e. free will.
The alternative is that there is something outside the physical world making the decisions. That seems to lead to all sorts of problems. Probably that is the (naive) version of free will Watts is dismissing.
There are more than two theories.
Different definitions of free will require freedom from different things. Much of the debate centres on Libertarian free will, which requires freedom from complete causal determinism (and therefore, freedom from inevitability).
(Watts seems to be requiring fully uncaused causes -- neurons that fire for no reason at all -- rather than not-fully-deterministic causes).
Compatibilist definitions of free will only require freedom from compulsion, and allow free will to exist in a deterministic universe. Sam Harris believes free will is a form of conscious control.
Libertarian free will has sub-varieties.
One is "contra causal" free will, which requires freedom from physics, on the assumption that physics is deterministic. This is often connected with the idea of a supernatural soul, that is able to override the physics of the brain. In contrast, naturalistic libertarians seek to find free will within physics, by rejecting physical determinism; they regard indeterminism as a necessary (but perhaps not sufficient) condition of free will.
Thanks, I really should get more into the literature on this.
I think what I'm saying is I agree with the compatibilist definition, and reject both the versions of libertarian free will.
I think the more interesting rejection is the naturalistic version (given that I understand it correctly). What does it really mean to have a choice in this sense, within physics? So first I think it requires that there is somehow a counterfactual. You could turn back time or there is an alternative universe where a different choice is made (or at least this is imaginable). I think that implies that given the same input and the same exact state of the brain (including the sum of all knowledge up to that point) your brain could decide differently. But this only seems to imply that there is some random component to your decision process. If that is the case, I think it is not really what most people think of when they say free will. So really, the naturalistic free will looks like an illusion, or no more free will than the compatibilist version. So my conclusion is that the naturalistic definition of free will really requires accepting the compatibilist view.
I'd be interested to hear your thoughts on this.
Naturalistic libertarians talk about torn decisions, where you have fairly strong reasons to do more than one thing. An undetermined choice between two things you have reasons to do cannot leave you doing something for no reason.
This point is explained by the parable of the cake.
If I am offered a slice of cake, I might want to take it so as not to refuse my hostess, but also to refuse it so as to stick to my diet. Whichever action I chose would have been supported by a reason. Reasons and actions can be chosen in pairs: in the case of the cake, (diet, refuse) and (politeness, eat).
"If by free will you mean that your brain takes input, processes it, and then makes decisions that have implications in the real world, then I think humans have free will."
That is not what free will means. That's just will alone. Free will means that some component of the will is free from the chain of cause and effect. It originates behavioral sequences, or, put another way, it is an original source of a causal chain.
However, others have defined free will as the source of cognitive outputs that cannot, even in theory, be predicted by any means available to us, and yet manifest a structure such that they aren't random either. Nonlinear systems are one such example. If consciousness is a nonlinear system then it might appear free from our point of view.
>That is not what free will means.
I think that depends on your definition of free will. Though I disagree that your definition of free will is a good one, considering what is normally implied by free will.
As I argue in other comments, by your definition of free will there is no free will. But people often imply this has consequences.
For example, people will often argue that if there is no free will, then someone is not responsible for their actions. I believe this is a false conclusion given your definition of free will. Choices that have real-world consequences are very much made in your mind, and it makes perfect sense to hold you responsible for your actions even if things could not have been different. After all, I think saying that things could have been different is either a rejection of entropy or the flow of time, or an endorsement of some form of parallel-worlds theory, which requires that there is some random component. It seems that we can't turn back time - which implies that everything that happened (in this world, if you believe in parallel worlds) must have happened exactly like it did in fact happen.
It's the definition historically used in philosophical debates about the issue. What the implications are, whether people are "responsible" (whatever you think that means) for their actions or not, or whether free will is possible under the causal assumptions of positivism, are not germane to the definition itself. Besides, it's not the overall concept of free will that I am disputing, it's the use of the word "free". No one disputes the existence of human will, but whether that will is free. Free of what? External causal forces. If the brain is not to some degree free from external forces, then its behavior isn't "free."
I have a feeling that if I pulled a bag over his head, locked him up in a cellar, tied him to a chair, and started torturing him he'd believe fast enough I had free will and that replies of "Sorry, Pete, I can't help myself, I can't choose to behave otherwise, this is all the complex interplay of atoms bouncing around mechanically in the meat standing before you" wouldn't cut any ice when he was begging me to stop.
Now, on a grand theory scale of "since the creation of the universe this was all foreordained due to the inexorable laws of nature", fine, sure, no free will.
On the level of "You are sticking a gun in my face and telling me hand over all my financial information or you'll blow my brains out", we certainly believe in free will, else why go to court over this? We don't prosecute rocks for falling downhill in obedience to the inexorable laws of nature and breaking our car windows.
"It's possible to torture you until you recant" seems an argument that's … well, in keeping with certain traditions.
Prosecuting rocks wouldn't make future rock falls less likely. We do, however, preemptively imprison (fence/concrete) or execute (blast) rocks that might fall to avoid that. People are predictive systems, so prosecution and its aftereffects presumably do help avoid future crime, as does revenge, in many cases.
You ask the person to stop because that might cause them to stop. There is no free will involved. Screaming for mercy is simply another external input having a deterministic effect on their behavior (the victim might not be able to predict that effect, but that's not germane to the argument). If the torturer fails to stop, that isn't free will, that's just other external inputs (the mutation that turned them into a psychopath) having greater weight when the behavior is produced.
As for what causes us to believe in free will, that's another issue altogether.
I dunno, Deiseach, if he was really brave and snotty he might creep you out pretty good by observing that he knows you couldn't choose to do otherwise than to say "Sorry, Pete, I can't help myself, I can't choose to behave otherwise, this is all the complex interplay of atoms bouncing around mechanically in the meat standing before you."
Per Aquinas, stones move by necessity, sheep by instinct and people move freely.
I never fully understood this. Does it mean that there is not a dichotomy between people/non-people but rather a trichotomy of non-living, living, and people?
>On the level of "You are sticking a gun in my face and telling me hand over all my financial information or you'll blow my brains out", we certainly believe in free will, else why go to court over this?
One could reframe this as "we certainly believe that we can't predict the future for certain." A somewhat nebulous, untestable concept like "free will" need not enter in any fashion.
Glad to hear from all the people who are sure all we are is dust in the wind 😁
https://www.youtube.com/watch?v=tH2w6Oxx0kQ
"Free will" is a metaphysical concept, not an empirical scientific concept. It has nothing to do with prediction.
A person whom you know well could even be more predictable than the dynamics of the 3-body problem.
Not sure I agree. I think there is a potential phenomenon to be explained (a mind that causes its own behavior) and a metaphysical explanation (souls or essence or what have you). But a brain that has the capacity to originate its own behavior, apart from the material chain of cause and effect in the universe, isn't a metaphysical concept, it's a scientific one (albeit one that might not exist).
If "free will" is a certain kind of will, what kind of will is its opposite? Wut do we call it?
I'm starting a new tech job and I'm terrified. This is my first full time gig. I graduated and looked for months before finding this job. I'm terrified that I'm going to mess it up.
Please, any and all advice is welcome
Manager Tools has an excellent series of podcasts about starting a new job. https://www.manager-tools.com/map-universe/new-job
Adding to my other reply (which Substack has unfortunately buried at top-level):
Take notes. In real time, not afterwards. The notes themselves are likely to help more than you'd expect (unless you are a habitual note-taker anyway); being *seen* to make notes will give a good impression in many ways.
Second this, but both real-time and afterwards. At every job I had, I kept a text file filled with "how to do xxx", and it was super useful. Also, if you're committing bug fixes, log them all in a similar file, also can be very useful.
Use ChatGPT to help you, it's really good.
As new to the company AND a new grad, you will be in a wonderful honeymoon period (~6 months) where literally no question you ask regardless of how basic or inane will be counted against you. Stuck on something for more than an hour? Ask. Want sparkling soda instead of those gross energy drinks? Etc. Btw, after 6 months you will be expected to know at least some stuff so get those stupid questions in early.
You are not going to know everything the first day, and nobody will expect you to do so. Ask for help, people will understand you are new and have no idea as yet 'how things are done here'. Be as helpful as you can be, try and keep out of office politics, and good luck!
Kick someone's ass the first day.
Take down the biggest one, and the others will respect you.
I'm starting a new tech job next week and I'm terrified too, even though I've been doing this for 20 years. Be a person, do good work, and don't be afraid to take criticism. Also, the fact that you're here puts you at an advantage: you can ask us things like "is this normal" or "what am I doing wrong here" and you'll get a bunch of answers.
You got this!
I was in the same boat pretty recently, and have (so far) managed to make good on it. A word of caution that every job, company, and person is different, so there's no guarantee that all of this will apply to you, but here are some ways of thinking that might be helpful:
1. Most professional jobs expect a modest acclimatization period in which you're still learning your way around the job and the business. Don't stress out if you can't do everything at once, and don't be afraid to go to your supervisor or coworkers to ask questions or request help.
2. Make sure to be respectful of whoever your immediate supervisor is. I don't mean "respectful" in the sense of bowing and scraping, I mean it in the sense of giving due consideration to how the things you do impact them. Things like being upfront with them about problems or delays that could reflect on them, keeping them in the loop if you have important discussions/collaborations that they're not party to, and not complaining about them to others in the org (even if they deserve it). If they're a good boss, this should just be basic decency. If they're not, think of it as a set of survival skills: whatever their faults, you'll still be better off if you don't make them mad at you.
3. Beyond 1 and 2, just generally try to do good work. In a good, well-functioning organization, good work will be noticed and rewarded. In a more dysfunctional organization it may not be rewarded directly, but developing the habits and skills of good work will make it easier to move to a better job, especially if you can produce anything tangible (like code in a portfolio) that demonstrates it.
4. Keep an eye on the exit, even if you'd prefer not to have to use it (this feels weird to write because I really like my current job and employer, and hope to stay there for many years). Every job is a business arrangement and you should generally expect that your employer will be perfectly willing to end it if that's what's best for them. So you should keep some of the same attitude. My understanding is that working for even a few months at a tech job will make you considerably more employable elsewhere, and you will likely be improving your skills quite a bit at first. Even if you don't intend to apply to anything, try to keep an up-to-date resume/CV and take at least occasional glances at relevant job postings[1] to have a sense of what your options are and how difficult it would be to pull up stakes if you need to. Also, psychologically, keeping in mind that you *can* leave can also help take some of the edge off stressful situations, and it's easier to do good work if you aren't stressed.
[1] But *don't* do it or mention it at work. Lots of people are reasonable about it, but some employers will take it the wrong way.
What a beautiful milestone. A few things to keep in mind:
1. Nobody there knows everything. Not even the CEO. All companies are collections of incomplete knowledge and inefficient systems.
2. Always do two things:
(a) your core job: what’s on your job description or your contract. You will be asked to do a multitude of things outside of that, and you’ll do them (because you want to be a good colleague, and because you want to learn, and because you don’t yet know what you’re going to be good at) but don’t let all those other tasks distract you from getting your core job done.
(b) pay attention to where you can add value over and above your core job. You’ll get some credit for doing exactly what you were hired to do, but you’ll get more for seeing things you weren’t hired to do and doing them too.
3. Yes, it’s scary, because you’re going to get paid next month and the month after that and the month after that, and you’ll be afraid that you’re not earning that money, but know that every day you’re learning things (even things that seem tiny and insignificant) that are valuable to your current employer – and especially valuable to another employer further down the road.
4. Jobs, gigs, projects, opportunities come and go all the time. You don’t know where the next one is going to come from, but I promise you it’s going to come.
5. You will face situations that you hate, that you aren’t sure you can manage, and you will wonder how you’re going to get past them. And then they’ll be gone, and something else will replace them, and everyone (including you) will forget about the thing that seemed insurmountable.
Good luck!
Don't post how you spend your day on social media.
Just remember that everybody else there also had a first day, once.
Just remember that the vast majority of people are not actually as good at what they do as they originally seem. Take pride in fully comprehending your responsibilities, take ownership of them, *just do things* instead of timidly wondering “what if?”, and you’ll be better than 75% of workers. Once you realize you are fully capable of doing nearly anything at all with a good ol’ college try you’ll start doing even more, and then you’ll be better than 95% of workers.
I'm in a similar boat; today was my first day, actually. I think the biggest idea to keep in mind is the fact that you were hired by your employer. They know that you're new to the game, they've factored your inexperience into their calculations, and they still decided it's worth it to hire you, because you'll learn and grow as they show you what your job consists of and what they want from you. Figure out the actual boundaries you have at work with your supervisor and your team, find out what your best resources to learn are (people or otherwise), do your best with what they give you, try not to delete the repo and all the backups (or make other mistakes of that magnitude), and you should be fine.
Sleep Sheep Review:
I signed up for Sleep Sheep the day Scott posted the link to an open thread. I was pretty desperate, feeling dysphoria from a number of particularly terrible nights’ sleep. What follows is my review as best I can craft it with a brain that is rationing sleep opportunity time.
The best thing about Sleep Sheep is that I’m doing it. I’d known about CBT-I for some time. I’d known it was the gold standard, that it involved rationing sleep, and I’d vaguely even planned August-October as the time to do it since my wife is on a series of vacations, and it’s easier for me to tinker with lifestyle choices while she’s away. I’d nonetheless remained fairly horrified at the prospect of rationing sleep given how horrible sleep deprivation has been to my life, and I’d been reluctant to pull the trigger.
Turns out horror like mine is common, and it feeds into the insomnia cycle. Awoken at night for whatever reason and impatient to fall asleep, I fixated on untold horrors that may happen the next day due to my inability to fall asleep. These foretellings compounded the problem. I’d noticed this connection, and begun to notice certain days turning out pretty ok, even excellent sometimes. I’d even begun debugging my emotional response, but this process was iterative and piecemeal, one enterprise within a life of many without a coherent plan and often set aside where other demands become urgent.
What I appreciated about Sleep Sheep was that it provided a simple, socially grounded structure to keep me persistently working towards consistent, quality sleep: log my sleep, follow the advice of an AI sheep, and meet once a week with my sleep coach. I stated at the beginning I would make a good-faith effort to work with the program, and I would have felt embarrassed going into a weekly meeting with Luomei having shirked my objectively-not-that-hard responsibilities. Meanwhile, the prospect of asking for a refund if the program failed because I hadn’t done the bare minimum felt horrifying. Squeezed between twin shame-driven imperatives, I charged forward into the sleep-deprived abyss.
And the abyss was… not that bad? Day by day, I muddled through, generating disconfirming evidence against the catastrophic foretellings disrupting my sleep in the middle of the night. My sleep wasn’t great, but it offered predictability. I could mostly plan a life around it. Going through the first couple weeks was difficult, but there was a frontier of incremental progress. In fitness, meditation, and clinical practice I have grown to love the philosophy of progressive overload, so the notion of a brighter future built on tolerance of present suffering slotted into a well-developed philosophical infrastructure.
There is an archetype of someone who does not need the structure and support Sleep Sheep provides. A friend from practicum noticed a few months back she was having difficulty sleeping, implemented a CBT-I protocol, and moved on to living her best life. Said friend also had a successful career as a military dentist, recovered from a horrible muscular injury, and balanced a mildly rigorous MFT program alongside fulltime work. She makes frequent use of spreadsheets.
I enjoy thinking of myself as made from that mold, but past experience demonstrates I am not, and I prefer the consequences of realistic planning. So I pay $300 a month, Luomei gives inspiring pep talks once a week, and an AI sheep gives me psychoeducation as we structure my CBT-I program. Sometimes I backburner sleep improvement, and I don’t worry about it because Sleep Sheep will get me back on track in, at most, a week’s time. I am experiencing incremental improvements in a life domain that has strained my marriage and rootbound my career.
I think many people are more like me than my high conscientiousness friend. We prefer seeing ourselves as able to do the simple thing, yet, empirically, we don’t. It feels stupid to pay an exorbitant sum to a specialist to hold our hands, so we starve valuable self improvement avenues of investment—not because they lack expected value but because the pathway to that value is aesthetically unappealing. Paid help can bypass anticipatory anhedonia, allowing better lives of more frequent exercise, emotional regulation, and, in my case, better sleep. But paying for “simple” feels weird and shameful, so we don’t. For people like that who have difficulty sleeping, I recommend Sleep Sheep.
A joke, a joke.
This is encouraging on many levels, not just sleep. - “A brighter future built on tolerance of present suffering “
Dude, I so wanted to loop in the general argument for using paid services to overcome the anticipatory anhedonia that impedes progressive overload across many life domains, but, alas, I couldn't fit it in.
That’s a lot to stuff into a mattress review 🙂
Oh, no, this is the CBT-I app Scott linked a month back: https://www.gnsheep.com
Trump's tactic of suing companies for damages, and then settling for a few tens of millions of dollars – YouTube just settled for $25M today – seems to be a very effective way of legally accepting bribes out in the open. This seems innovative enough I wonder if this is going to become a standard strategy, both in the US and worldwide; I think most countries have a legal mechanism for settling suits out of court.
No because it's too hard to execute the litigable action with plausible deniability. These suits are all downstream of things that happened on Jan 6. That's too indirect to be useful. No company is going to initiate a bribe by attacking a politician who has a 20% chance of being elected again in 4 years. Cool idea though.
Unless the settlement is in the form of a suitcase full of cash or a chest of Spanish bullion or a canvas bag clearly labeled "SWAG" . . . personally delivered to Donald Trump, it's not bribery.
It's a tax that gets plopped into the general fund and squandered on Something Useless (tm) however you personally define "Useless."
Naturally I only support Federal programs that Any Reasonable Man considers "Essential," but All Those Other People constantly advocate for useless and wasteful spending.
Sad thing is that it's an irrelevant amount of money to both Alphabet and the US Treasury.
YouTube's settlement seems earmarked for building the White House Ballroom, not the general fund of the US Treasury.
And I missed the fact that it’s a Personal lawsuit vs “Donald Trump” rather than vs “President Trump.”
Look into SLAPP suits and anti-SLAPP laws
You might have missed the point. I agree that the companies he's suing would win in court, and the expense of the trial isn't significant to them. Neither is the money they're settling for: this is a way to legitimize a mutually beneficial transaction.
> Trump's tactic of suing companies for damages, and then settling for a few tens of millions of dollars – YouTube just settled for $25M today – seems to be a very effective way of legally accepting bribes out in the open. This seems innovative enough I wonder if this is going to become a standard strategy,
I'm not sure it is innovative. Just as the Texas anti-abortion law was described as "innovative" for copying the longstanding method that civil rights laws used to ignore the first amendment, this approach sounds a lot like the longstanding agency method of soliciting a lawsuit from an ideologically aligned group and then settling the lawsuit with an agreement to do something that they wouldn't have had the power to do if it hadn't been part of a legal settlement.
This sounds pretty interesting. Can you elaborate with examples?
I expect that a lot of the effectiveness comes from everyone still treating Trump as an aberration. A one-time payment of $25 M is chump change to a company the size of YouTube, and more than worth the cost if it's all they need to do (even if it's multiple times) to weather the storm and wait for sanity to return.
If it started to look like other government officials were going to follow suit, suddenly companies would be incentivized to fight much harder. They keep armies of lawyers on retainer anyway. In a world where non-Trump officials feel free to do the same, fighting some drawn-out legal battles is enormously preferable to signalling that you're open to paying Danegeld to anyone with the power to ask for it.
I agree it's a trifling amount for any of these companies. I think this is "protection money" in a sense more real than simply as a euphemism for extortion: I think the implicit agreement is that Trump, in addition to not attacking them, keeps others with the power to hurt them – I'm thinking primarily of federal regulatory agencies here, but should also include non-Trump officials – from doing so.
"Caught" implies a crime was committed, and I don't understand what crime Youtube could have committed by banning the President's account. Surely Trump isn't claiming that the government should compel Youtube to carry his speech, right?
You might be missing the wood for the trees: that explanation seems narrowly tailored for YouTube and you'd need a different one for the other companies he's doing this to/with.
> you'd have to actually discuss them
When I said you'd need a different explanation for each company, I wasn't asking you to come up with them. On the contrary, I meant that you needed a single explanation that works for them all without needing a list.
A lot depends on the response. If Dems win in 2029 and investigate each such case for bribery, then it will probably be shut down. If not then it just becomes an accepted tactic for anyone brazen enough to do it.
The Supreme Court has made it harder both by narrowing the definition of bribery and by saying the president is immune from prosecution (the obvious tactic would be to tell these CEOs that you'll throw the book at them unless they flip and testify against the president, who also might pardon them). There might be some legal hardball a future administration could play to get around it, but I don't know if they would.
Guy's a cornucopia of ideas of that kind.
Deserving of the Nobel (Memorial) Prize in Economics.
I'm a 33-year-old man who is exactly 5 feet tall. I'm thinking about getting on a dating app because, realistically, that's the best way that I'll be able to meet people to date. I'm dreading it though because I've heard that it's brutal for short guys, and I'm way out on the end of the spectrum. I vaguely remember some study where upwards of 95% of women wouldn't date someone my height. I don't even want to google to find the study because I remember all the info I could find on it before being pretty bleak.
I'm worried that a dating app will destroy the meager amount of self-esteem that I do have right now. How should I prepare myself for this? Am I overly worried?
I can’t give you any specific advice, but I will say approach it in the spirit of experimentation, keep notes, and optimize like hell.
End of the day: most women are attracted to whatever says "successful mate" to them, which probably can mean "physically capable", and by extension, "tall", but also means a great many other things, which can easily compensate for "not tall". Since you can't help not-tall, make sure the other things work.
Christina's advice about looking like an expert on something is sound. The more, the better, so either be good at lots of things or _really_ good at a few. So is her advice on looking like you care about how you look, which I take to really mean that you should care about taking care of yourself physically. So, chances are, you can look healthy (exercise + diet). This demonstrates self-discipline; everyone likes someone who's self-disciplined.
If you're concerned enough about self-esteem to bring it up, then that might be your first challenge. You're probably likely to get a lot of improvement if you can express confidence in yourself. If you're there, you're in really good shape.
I'm reminded of a fandom convention I went to years ago. That evening, I'm walking along, and I notice a crowd of guys trying to talk up a chum of theirs who'd just turned 25 and was still single. I keep going (to the men's room), wash up, head back out, they're still trying to boost him, telling him not to be worried. I lean slightly to the side as I pass by and say, "they're right you know". Then I turn around, thumbs at myself, "40", and give my best "I'm not worried either" pose. One of the boosters yells out "you are my hero!!". I grin, salute Mr. 25, and keep walking. They didn't have to know that the woman I was crushing on for several months mentioned that day that she was interested in another guy.
Today, I live with my GF, steady as a rock.
There are _some_ women who like a guy with self-esteem issues (it comes off as vulnerability, and they're looking for someone they can nurse), but confidence probably puts you in a larger pool. Plus, it's probably healthier independent of whether you meet a nice gal.
Make sure some of your photos on the app show you expertly doing *something.*
Most women who are smart enough to be meaningfully attracted to someone smart enough to be an ACX reader/commenter are going to be profoundly attracted to the confidence which comes with competence in some skill that matters to others.
So if you play a musical instrument, double down on practice *and performance.* If you cook, get better at plating. Play a sport? Make sure it looks like you love it. If you're interested in topping in kink, devote yourself to learning rope bondage, especially rigging, and go to lots of public workshops (I was friends with a 5'2" older guy who developed a really amazing skillset in electrical play, and he had many kinky women following him around despite looking NOTHING like a conventional porn star). Etc.
The exception to this advice about highlighting a skill is video gaming. While there may be some women who deeply admire expert gaming, in reality, no there aren't.
You're a guy, with presumably normal male biological impulses, so you may be tempted to dismiss this advice because you're used to human attraction being overwhelmingly visually based. Please try to ignore your own experience and take this on faith: Women are indeed different than men when it comes to attraction. For the smarter ones, expertise is as attractive as a long body and square jaw. Formula 1 drivers are small men who don't lack for women's attention, as are gymnasts and jockeys and musicians and and and...
You don't need to be famous, of course.
You just need to be objectively good enough at something to legitimately earn the confidence of being good at something.
Women will notice if you show that to them.
>The exception to this advice about highlighting a skill is video gaming.
I would also recommend against a profile picture showing you fishing. "Man posing with fish" profile pics seem to be enough of a cliche to inspire eye-rolling among women of my acquaintance who use dating apps. My social circles (which lean very nerdy and very blue-tribe) are probably culturally less likely to consider fishing an appealing hobby, but even among women who do find fishermen alluring, I expect the market is saturated.
Yeah, agreed, fishing may not be a great look unless it's, like, a marlin or tuna or something that required tremendous skill.
And even then, maybe not.
I dunno Christina, this guy was married 4 times. Fish catch pix didn’t hurt him a bit with the ladies.
He did die in a place called Ketchum though so they might have been a bad omen.
https://1.bp.blogspot.com/-eNEDkuAszfg/YPhlemQiNLI/AAAAAAAEKHE/O9MQGRQp84UtsUgzeIK-CsT3DdiBjA_fQCLcBGAsYHQ/s1000/ernest-hemingway-big-fish-3.jpg
It’s a little known fact that the postal service didn’t actually have any rules against their employees dating women, it just worked out that way for me.
If it makes you feel any better, Robert Reich is 4'11". But he seems to have a sense of humor about his height (whether he did at your age, I don't know). So, maybe make light of it in your profile?
I completely understand that it hurts to be rejected/not-selected regardless of the reason: if you can develop the mental habit of only "counting" rejections for appropriate reasons (difficult, I know!) this might make it less distressing and easier to bear?
For example, if some woman rejects you but she's the sort of person you wouldn't want to date anyway, that's a great example of the sort of rejection that shouldn't count (if anything, it should count as a bonus: this woman you wouldn't want to date is doing the work of removing herself from your dating pool for you!)
If this system makes sense to you, I suggest that "I'm the sort of person who filters based on height rather than on personality, shared interests, even on looks/vibe more generally" would be an excellent candidate for the sort of person you probably wouldn't want to date anyway?
If the average match rate for a 6' guy is 1 per week (I made this number up out of thin air as I've no idea really! The general point should hopefully be the same if you plug in different numbers) and the average match rate for a 5' guy is 95% lower (but it's cool: this 95% is largely made up of women whom you mostly wouldn't want to date anyway!) then your match rate would be about 1 per 5 months (arithmetic spelled out below):
A) This isn't to be sniffed at! Far more than if you don't join in the first place; if the first match turns out to be perfect that's a total of 5 months' wait to meet your soulmate, which seems like a great deal; if the first match isn't perfect this is still good and healthy, see Scott's writing on micromarriages for more info: https://www.astralcodexten.com/p/theres-a-time-for-everyone
B) If you get more than 1 match every 5 months - you're doing better than your expected baseline, your other qualities are so attractive that *even some of the "I don't match with short guys" women are matching with you*(!), and you should actually feel very good about yourself!
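(If it helps, here's that back-of-the-envelope arithmetic as a tiny Python check - my made-up baseline is the only input:)

```python
# Made-up numbers from above: hypothetical 1 match/week baseline, 95% lower for 5'.
baseline = 1.0                 # matches per week for the hypothetical 6' guy
rate = baseline * (1 - 0.95)   # 0.05 matches per week
weeks_per_match = 1 / rate     # 20 weeks between matches
print(weeks_per_match * 7 / 30.4)  # ~4.6 months, i.e. roughly "1 per 5 months"
```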
Hope this helps - and good luck!
If someone would reject you for your height, you would not enjoy dating them.
https://www.youtube.com/watch?v=OgIBuOiigSo
Jordan Harbinger, who is moderately short ("5' 10" with shoes on") says that women don't care about height as much as some say they do.
Women who are with a man shorter than they are will lose the ability to see the height difference. Women choose men who make them feel good.
Or maybe you could say that they care about perceived height, and will perceive what they need to perceive.
I haven't confirmed this, but I do find it interesting.
I struggle to understand how you can describe someone as "moderately short" when they are above the average male height in the US (5' 9"). If I were the OP I would be unable to relate that comment to my situation and indeed would probably find it quite offensive.
It’s the distorted world of the Internet, where leading ladies are “mid” and normal guys are short because everyone’s reference points are online celebrities or even AI generated.
I know a man who’s your height. Has been happily married for a couple decades to a petite, very pretty, Asian woman.
You might benefit from telling yourself that self-esteem is a decadent concept that is eroding the foundations of Western civilization - it does seem to be holding you back.
I would advise you to treat your situation like an emergency and sign up ASAP. Take time off work if necessary. Sign up for as many as possible and ask yourself if you're willing to date fat women/single mothers/etc. There are even dating apps especially for fat people.
What's your race? Ngl, it will be pretty brutal so be mentally prepared. Prepare for the worst, and even hope for the worst honestly. It sucks but you have to play with the cards you have been dealt, unfortunately.
I'm white. The worst case scenario is getting no matches and a small number of mean spirited messages. I'm thinking that cruel messages aren't super likely but no matches is realistic.
That's a pretty good advantage to have in the dating market. You have an ok chance with East/Southeast Asian immigrant women I'd say. Get the premium version of 2 dating apps at a time and swipe somewhat selectively for a few months. Watch out for scammers; you may make mistakes here and there at first (asking out too soon, asking out not soon enough haha) but with time you should get the hang of it. Have a decent bio (job if it's something that pays well + 2-3 hobbies). Look online into examples of bios and fine-tune yours accordingly. 3-4 good photos (no sunglasses, with different clothes at different locations). Also remember most guys get very few matches to begin with; with you it will be even slower, but it's a grind, and if it works out it will be worth it. Don't spend more than 5 minutes on dating apps per day. Be disciplined about that. If after a few months you feel like it's hitting your self-esteem, just delete the apps for a few months and focus on something else. Come back to it when you are ready.
Seconding some of the advice on the photos, and I'm going to escalate by advising looking into professional portrait photography. I wouldn't go so far as formal corporate or actor-style headshots, but rather "candid" (those quotes are doing a lot of heavy lifting) portraiture by a real expert who knows how to make a person's face a story.
Everyone with a phone thinks they're a photographer, but someone good enough to charge money will catch your face looking compelling, even if you're "ugly."
I don't think a professional portrait is going to work. It comes across as trying too hard, and again, girls see through these things. His face is not the issue either. He is probably attractive enough; it's just the height that's the issue, which professional photography won't help. Professional photography will help if you have an ugly face but average height. But again, no harm in trying different things I suppose.
Hey, you know what "girls" like?
Men who care *enough* about what they look like, which is to say, care at *all.* This covers the pretty basic stuff of having good hygiene and basic grooming, dressing in clean, well-maintained clothing which fits their body, and so on.
A photo which communicates "I care enough about making a good impression to share this excellent photo" conveys all of the above, plus more.
Most women (not "girls," the OP is 33!) are not going to care where a compelling photo of a man came from. A "candid" professional photo of the OP flambéing a crêpe or the like can be explained as a "candid" shot someone took during a dinner party.
It's not that hard.
I mean, yeah, look at the photographer's portfolio. If they aren't good, don't hire them.
I'm more saying that photos by an established pro portrait photographer are *far* more likely to be appealing than random selfies.
I'm pretty skeptical of the claim that apps are the best way to meet people to date. I don't have skin in the game, so my opinions are admittedly perhaps armchair, but maximizing face-to-face interaction especially in disproportionately female spaces always made more sense to me for the meeting people part.
That works if you are already attractive or have insane charisma. There's a limit to how many people you can meet in real life. But with dating apps, if you live in a large city your pool is in the hundreds of thousands. I am a quite ugly guy and the few dates I have been on have only been possible due to dating apps. If I lived in the 90s, where I had to meet girls in clubs or something, I would have been cooked haha.
Damn. If you're right I've been giving people terrible advice. Nonetheless, I think charisma is pretty learnable? Like, I suspect that if someone read a book on therapeutic microskills, and did loving kindness meditation for somewhere between fifteen and sixty minutes a day they'd have the basis for iterative improvement? I am myself fairly charismatic and reasonably attractive though, so I always wonder if I'm just full of shit on this subject.
What book(s) specifically do you recommend for improving charisma?
That is a very fair question. Although it would behoove me to have an answer given how strong my opinions on this subject are, I have never actually assembled a curriculum or anything. If you dm me I'll give you my number, and I'd be happy to chat about the subject, especially considering everything below is pretty ad hoc.
Back to your request. I think my suggestion would be some mix of microskills books, meditation books, and therapy books. Your goal here is to be able to feel ease and love so that you can express that reaction to others, sincerely enjoying their company. For the microskills book I read Intentional Interviewing and Counseling by Ivey and Ivey. It's not, like, great, and if someone knew a good book on therapeutic microskills I'd be really interested, but it covers encouragers, reflecting, and summarizing, which are really what you want.
For meditation books, what you're going for is loving kindness. If you can cultivate an intense experience of love, you'll get the ability to convey it to others, which will make you really enjoyable to be around. I really like I'm Right You're Wrong by Ajahn Amaro and Broad View Boundless Heart by Ajahn Amaro and Ajahn Pasanno. The authors are Buddhist monks, so if you prefer something secular I think Judson Brewer has guided meditations, and Sam Harris might cover loving kindness in Waking Up?
What therapy books are good kinda depends on your personal taste and what's difficult for you. Generally, what you're going for is excellent self esteem, non-anxiety, and communication skills. Grading by enjoyability of writing and usefulness of content, Judson Brewer and David Burns both wrote good books. The New Peoplemaking is also great if you have difficulty with your family of origin.
"How to Win Friends and Influence People" is trite, cliched, and pretty much what it says on the label. Not a bad place to start, at least.
I, uh, really should read that.
You can learn enough charisma to get on with day to day life(work, school etc.) but you are not going to be charismatic enough to pull girls while being conventionally not attractive. That kind of charisma is natural. Girls are not dumb either. They are insanely perceptive and can sense you are just following the steps from the How to be Charismatic guide. There will always be a small number of girls who like a specific type of guy(short, autistic, ugly, goofy, awkward) whatever. They themselves might not fit into any of the above categories but for whatever reason they like them. Our best shot is to cast a wide net on the dating apps and try to find them. But the process will be absolutely brutal and there's always a good chance it might not even work out. But again life ain't fair.
Hmm. I think there's truth to what you're saying. Like, I have a friend who seems to work from a playbook and sometimes he comes across false. Meanwhile, my experience has been mediated by being conventionally at least mildly attractive. Still, for me across the arc of my life, learning charisma has _felt_ like a skill. Like, I have over time a more coherent notion of how to befriend someone, or improve someone's day, or ask a store employee to do me a favor. My friend who read the books too, he genuinely can carry a conversation with a stranger better than most.
Do you apply your suggestion to middling attractive people? I haven't actually advised anyone who is _un_attractive. But a lot of people who have strengths seem to obsess over their deficits. While not contradicting anything you said, I remain confident a big chunk of attractiveness/charisma is intrapersonal and learnable.
Honestly no harm in trying. It's cliche but everyone's different. There's a million subtle different things about us physically and behaviorally which is perceived subtly differently by everyone else as well.
I will take back what I said before, and advise anyone reading this to just try different things out. See what works and on whom? Maybe the steps from the guide to Charisma might just work out for you. Even if it doesn't work on girl 1, it might work on girl 2, who knows? But you still will be interacting with people, and that alone goes a long way towards improving your social skills.
Yeah, I think I imagine human relationships as being fairly, well, Pavlovian. Like, if you interact with someone and bring them delight, it's easy to imagine wanting to interact with you more. If you sincerely enjoy the exchange, it's much easier to put forward signs of interest since it's very low stakes to refuse you (you enjoy them regardless). Having female friends is a good way to get introduced to eligible women, and statistically some quantity of people are in fact single and interested. I suppose my model supposes that it is possible to learn to feel and spread joy, which not everyone agrees with.
Per your points though, it's super great to do stuff with people because as you say it's a fun way to connect—plus if there isn't romantic compatibility, you still had fun hiking or whatever!
Bruh alright. As someone else said if you put your height up front hopefully nobody will match with you and send hurtful messages.
That being said there is a real grind and soul-sucking aspect of it. For guys of average height it's already madness. Even I thought I was desensitized but the constant bs just hurts.
That's your greatest risk. The grind. If you can withstand that then maybe you'll make it out.
I'll note that this rate varies wildly between apps.
I'm skeptical of the 95% figure. I haven't heard it before, and when I went looking for it just now the search results sound like internet folklore based vaguely on some survey that someone once did about how many women are interested in dating men who are shorter than they are. The impression I get is that among straight+bi women, a small but vocal minority have a strong preference for dating taller men, most have a mild preference for taller men but it isn't a dealbreaker if you're attractive to them in other respects, a nontrivial fraction don't care at all, and a few probably have an active preference for shorter men.
Even if the 95% figure is accurate, five percent of the millions of women using Tinder or whatever is still a rather large number of people in absolute terms. Some fraction of these won't be interested in you for other reasons, or you won't be interested in them, or both, but you only need to find one good match for the search to be worthwhile.
> I'm skeptical of the 95% figure.
It's a pretty good starting point, because women generally only swipe on ~5% of men to begin with, so you're necessarily going to be addressing a small pool:
This is from Tinder data, affirming the 5% figure
https://imgur.com/H5oXiUZ
Formerly, in the age of actual websites, it was about ~20%, and declined with the rise of swiping apps / Match Group.
The following two are from Golden-age OKC data and show the male attractiveness rating and drop offs:
How men and women respectively rate each other on golden age dating website Okc:
https://imgur.com/a/gp2FZJy
Female "likes" by male attractiveness
https://imgur.com/SvVmu1F
Messages by male / female attractiveness
https://imgur.com/IpHyymX
And a direct height data point "is much shorter than you" by college / non-college women:
https://imgur.com/wN4v0Mj
And percent of women setting height filter on Bumble:
https://imgur.com/a/gl6Kt0M
I don't think the overall swipe rate is a good proxy for height filtering, since women make that decision for any number of reasons. Some of those reasons are presumably highly correlated between different women (conventional attractiveness, decently-written profile, etc), while others are less correlated (cultural signifiers, shared interests mentioned in profile, reminds her of an obnoxious ex, etc).
The "is much shorter than you" chart is consistent with my expectation that a many women prefer taller man to varying extents, but many care little or not at all about height, and it's one factor among many for at least some of the women who do care about it. Note that by that chart, 36% of college-educated women and 52% of non-college-educated women self-report not caring about men being much shorter than they are.
I'm not 100% sure how to read the Bumble chart without more context. Based on the range of the Y axis, I suspect it's saying that among women **who set height filters**, Y% of those filters include men who are X height. If that's what it means, then it doesn't really tell us how many women set height filters at all. It does tell us that an awful lot of women who do care about height aren't interested in dating professional basketball players or men who have to duck when going through doorways. It also tells us that minimum height filters are set at a variety of levels, many to only include conspicuously tall men, some to exclude men of below-average height, and some to exclude men who are at what is presumably the filterer's own height give or take a few inches.
This sounds right to me. I'm a tall woman, happily married for almost 30 years to a man much shorter than I am, and I could not give less of a damn about his height / our height difference. He makes me feel awesome and is truly my better half. And, as my grandma once said "they're all the same height when they're lying down" ;)
This made me feel a lot better. Thank you!
Glad I could help!
“Even if the 95% figure is accurate, five percent of the millions of women using Tinder or whatever is still a rather large number of people in absolute terms.“
Exactly! It’s a numbers & time game, and one has to throw their bait in the water where the fish are and wait. Just cuz a fish bites and you don’t land it doesn’t mean a fish won’t come along later!
If it makes you feel better, I estimate that if you're otherwise about average, upwards of 95% of women you "swipe right" on aren't going to date you anyway.
Hmmm that doesn't make me feel better at all lol
Just put your height on the app so it's visible. Anyone who matches/chats with you will be aware of your height and is unlikely to be a dick. I think it's probably a great way to meet girls that are interested!
I'm definitely going to be upfront about it. I think one of the pictures I'll upload will make it obvious that I'm smaller while (hopefully) still being flattering.
Good luck! And remember… you have a pretty awesome camera in your pocket that can provide solid photos. I made a homemade little camera stand for my phone to take some good pictures of myself and it made a noticeable difference in the amount of likes I get. As guys we don't often get good pictures of ourselves taken out in the wild… it's not necessarily "cool" to set up your own little photo studio for your dating profile, but it can be worth it if the photos are staged well.
At least on Hinge, you don’t have the option to leave your height off
The book "Homo Carnivorus" settles the "is meat healthy or not?" question for me. It makes a strong and substantiated case for why meat is (probably) healthy. If anyone cares about that, I would recommend you read that book. It basically got me to stop wondering and worrying whether what I eat is beneficial to my health or slowly killing me.
If you're on the fence about attending a meetup: I strongly recommend going! I'd never been to an ACX meetup before, and I was worried I wouldn't fit in or it would be awkward, but everyone was super welcoming and I had a great time.
I have seen several blogs whose authors do not use any capital letters. This is clearly a deliberate choice, as the authors are skilled writers. Does anyone know what the meaning / intent behind this decision is?
It’s the equivalent of calling yourself a smol bean.
I've heard it called 'lapslock' and to people from certain social media spaces it reads as an informal, casual tone, a bit laconic. Having spent time in places where this and other idiosyncratic punctuation is common, I do indeed get cues about emotional tone from where a poster uses or omits punctuation, and it can be used for humorous purposes. Think "What?" versus "WHAT" versus "what."
Reading long form text in lapslock is rather obnoxious though.
It’s a little-known fact that while there are a very large number of capital letters out there, their numbers are finite. With their unprecedented and indiscriminate use in Truth Social posts, their TFR has fallen below replacement levels.
I suspect that these writers are concerned citizens drawing a lesson from the decimation of the vast herds of American Buffalo and are simply trying to preserve them from extinction.
TYPICAL "PEAK CAPITAL" PROPAGANDA. IF CAPITAL LETTERS WERE TRULY RUNNING OUT, WHY HASN'T THE COST OF PRODUCING THEM INCREASED? WE KEEP FINDING NEW RESERVOIRS ALL THE TIME, AND MOST COUNTRIES HOLD VAST STRATEGIC RESERVES TOO. I, FOR ONE, GREW UP WITH A DIZZYING ARRAY OF CAPITAL LETTERS, REFINED INTO ALL SORTS OF FONTS AND ITALICS, AND TO STOP USING THEM OUT OF SOME MISGUIDED SENSE OF DO-GOODERY WOULD BE TO ROB OUR KIDS OF THEIR FUTURE! I'LL BE ROLLING CAPITALS FOREVER, JUST TO SHOW YOU!
thank you for your attention to this matter!
I'm not totally sure why it originated, but I think it's nearly always a feature of girlblogging on tumblr, or stems from a tradition of girlblogging that began on tumblr.
I have only heard of it, in a modern context, as being an annoying, stupid thing that Sam Altman does.
So it's definitely not just a tradition of girlblogging from tumblr.
I don't know what it means, but I can't read it and I wish they'd stop.
I can think of a few possibilities:
1. Author is a fan of e.e. cummings
2. Author is an eagle typing with the talons of one foot while perching with the other foot
3. Author's shift key and caps lock key are both broken.
4. Author is typing on a phone and finds the extra tap required to make a capital letter cumbersome
5. Author is of an age bracket where text/chat speak is the standard casual register for written communication.
6. If all-caps are shouting, then all-lower-case is whispering, and the author wants to use this to create a sense of intimacy and sharing secrets with the reader.
Of these, 5 seems the most likely, followed by 1 and 4.
> 4. Author is typing on a phone and finds the extra tap required to make a capital letter cumbersome
They would have had to first disable the feature that automatically capitalizes the first thing you type.
I prefer 2
Go eat some worms birdboy
I think it's supposed to exude a feeling of informality and intimacy, as if you're just exchanging text messages, instead of reading hoity-toity long-form essays in the New Yorker.
It's an easy way to be different without giving offense?
I'm pretty young, and my wife and some friends (especially girls) do this. I think it's an attempt to be demure and understated.
Can you give an example?
My guess is that it makes it feel more stream-of-thought as opposed to polished, so that you see the writing as a look into the author's mind rather than as text that stands on its own.
One example is here: https://fatherkarine.substack.com/p/ugly-girl-manifesto
It's not really my thing, but this person is one of the funniest writers I've discovered on Substack, so I ignore it. I'd also like to chime in with TotallyHuman and say I'm not sure what the actual effect is supposed to be: your guess seems as good as any.
That woman is really funny, yeah.
It strikes me, reading that, that we don't really need capitals. We do need quotation marks though - there's a tendency for modern authors to drop them, and it will get old fast.
> We do need quotation marks though - there's a tendency for modern authors to drop them, and it will get old fast.
I understand that Old Chinese texts use different verbs for introducing quoted speech. There are no quotation marks, but the distinction is drawn anyway.
You might see a bit of this strategy in English, where "said" might mean anything, but "quoth" can only report quotes.
(Although browsing through the wiktionary citations, "quoth" appears to be able to report indirect speech before the 20th century.)
Hello all bloggers and wannabe bloggers in the "rationality-adjacent" sphere!
You have probably heard about https://www.inkhaven.blog/ -- and if you have not, you might want to click that link and read it right now. Long story short, there will be a 30-day training for aspiring bloggers, where they can receive some wisdom from their more experienced colleagues (including Scott Alexander), and in turn they are required to post 1 article on each of those 30 days, or be kicked out of the camp. Publish or perish! You need to be actually at that place, for the entire month.
And by the way, the deadline to apply is tomorrow. (Not sure if there is still any place left.)
If you are like me, you are probably complaining about the cruel fate that doesn't let you take a month of vacation exclusively for your hobby. (And if you have the time, you are probably unemployed and don't have the money.) Even if we skipped the part where you have to be there in person, and allowed online participation, there is probably no way you could write 1 article each day. Luckily, there is an alternative.
EDIT:
Read https://www.lesswrong.com/posts/7axYBeo7ai4YozbGa/halfhaven-virtual-blogger-camp for information about the online alternative and how to join it.
It will be half as intense, but twice as long: producing 30 blog posts within 61 days, during October and November. We will start sooner than the Inkhaven group, and finish at the same time, hopefully with the same output. That means making one post about every two days, approximately; the rules will be much less strict: you won't be kicked out if you don't have anything posted by day 2, but you will be if you don't have anything posted by day 7, because otherwise what's the point.
Languages other than English are allowed; videos are also an acceptable medium; it is not necessary for all 30 posts to be on the same blog; topics are not specified, just please don't post anything that would get you banned in an ACX Open Thread. Also, no AI-generated text! If you already have a blog and don't want to ruin it by suddenly writing too much at lower quality, it is okay to create another blog for this purpose. It is okay to publish pseudonymously. There is no reward other than your own feeling of accomplishment, and some peer pressure to perform.
I made a Discord group for this: https://discord.gg/GHkqKHRy
tldr Looking for AI safety events in Bay area in January 2026.
I am a mathematician visiting Stanford University in January 2026. In my spare time I've been trying to work on AI alignment the last couple of years, but feel somewhat isolated. Any suggestions on people to talk to/places to visit while I'm there (events/conferences/bothering-random-researcher activities)? Thank you!
I have a math PhD from Stanford, live nearby, and am interested in AI safety. Let's meet up when you're here.
That would be great, thank you! I will send you an email closer to the date - is the @eulercircle.com email address I found on your web page a good way to contact you?
Hi all,
Re-announcing that we're recruiting for the cholesterol:coprostanol study (https://docs.google.com/forms/d/e/1FAIpQLSf_BXwlEJaGxtQVtOpTzLMgpCmzLbA171izWx0EfSBBAKnvOw/viewform). We'd particularly love participants in the Bay Area, so we can do some real-time testing with the probiotic, to look at engraftment and whether serum cholesterol levels change after the species is introduced—but we'll take people from anywhere. Signup is free and participation is easy; let me know if you have any questions!
So in a previous thread someone suggested that I could mute people on substack and not see their replies. Well, I did this and I'm still seeing replies. Do I need to block them? And then what does mute do?
Thanks
Yes, try blocking to get rid of their comments. I am confused about this too. I guess "mute" only refers to direct messages and "block" refers to everything.
This is kinda an extension of something I wrote as a reply below...
How sure are we that returns to intelligence are linear or better, especially across all/most areas of life? It seems that ASI predictions rest on an assumption that going from (the equivalent of) IQ X to IQ X+10 provides the same or greater benefit regardless of X and regardless of what you're trying to do with it--that a super-smart entity would be super-persuasive and super-capable in every aspect of life it turned its attention to--i.e., that intelligence is a general superpower.
Can someone steelman this assumption?
Because my experience is the reverse--that IQ (intelligence generally) is strongly subject to diminishing returns and behaves a whole lot more logistically. You get great returns going from sub-normal (~80 IQ) to normal (~100) and pretty darn good ones going from normal to "genius" (~120 IQ). But even that latter jump is more narrow than the previous ones. An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities" (very present oriented, limited ability to really consider other people's experiences, etc) relative to an IQ 100 person. But I haven't seen the same level of increase in anything but sheer academic capability (which often doesn't translate into other fields even of relatively intellectual pursuits) from IQ 120-ish people. In fact, I've often seen *regressions*--people who are really smart often struggle to talk meaningfully to "normal" people and fail to connect to how they see things. Which suggests to me that IQ starts losing a lot of its punch the higher you go. And may actually correlate with *reduced* performance in other aspects of life.
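To make that "behaves logistically" claim concrete, here's a purely illustrative toy model in Python - the curve shape and every parameter are invented for illustration, not fitted to any data:

```python
import math

def capability(iq, midpoint=90.0, scale=15.0):
    """Toy logistic curve: 'real-world capability' saturates as IQ rises.
    The midpoint and scale are arbitrary choices, purely for illustration."""
    return 1.0 / (1.0 + math.exp(-(iq - midpoint) / scale))

# Marginal benefit of +20 IQ shrinks as the baseline rises:
for iq in (80, 100, 120, 140):
    gain = capability(iq + 20) - capability(iq)
    print(f"IQ {iq} -> {iq + 20}: marginal gain {gain:.2f}")
# Prints roughly 0.32, 0.22, 0.08, 0.03 - big early jumps, then flattening.
```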
I also don't see very many highly-charismatic super-smart people. Are chess geniuses (all very high IQ) good politicians/people-persuaders? Not that I've seen. Are hard-science Nobel Prize people that much more moral or capable than others? Not that I've seen...many of both sets tend to be cranks and pretty incapable outside of their narrow specialty. Even *within* their broader specialty (e.g. physics), a genius at quantum mechanics isn't better than most smart grad students at, say, general relativity--the skillsets and knowledge base are too different. And they're not good at all at, say, organic chemistry.
My background is in academia (PhD in Computational Quantum Chemistry), but I've also served as a missionary in Eastern Europe, worked with lots of uneducated people as a teacher and with community service, and am currently a programmer at a non-elite smaller company.
https://chillphysicsenjoyer.substack.com/p/youre-a-slow-thinker-now-what/comments
A slow thinker considers how important his lack of speed is, and concludes that slow thinkers and fast thinkers do about equally well in life, and he's arranged his life to have room for him to take the time he needs.
I wonder whether IQ tests select for fast thinkers and might miss out on some slow but good thinkers.
Anecdotal, but when I was tested for the gifted program in 5th grade, my IQ test came back abnormal because I spent so long ensuring correctness on a few sections that it timed out. So it looked really low on some sections and high (not genius) on others.
So I can relate. But then I've also seen that smart people can usually get to an answer faster, so... Not sure.
I've often thought that my particular brand of intelligence (which isn't top tier by any measure, but was enough that I only had to start trying in grad school and I passed the preliminary exam on the first try, which is rare at that school especially since I didn't study at all for it) is more about making connections between things I know and interpolating from a wide base of knowledge than about raw horsepower or creative spark. Generalist vs specialist intelligence, maybe?
I'm not inclined to grant your premise. The most intelligent people I've met or worked with (thinking world-class here) have *without exception* been the people who are best at "being people" -- they are likeable, wise, trusted (and trustworthy), have lots of friends and lots of admirers; if a problem (in any field) comes up, they are the people who *other* people turn to first.
I have little doubt (but see below) that if society were such that reproductive success were primarily determined by that kind of "popularity" (influence or wisdom might be better words), instead of incompetence-at-using-contraception (;-), then they would be the most successful breeders.
One counter-point, though: I've seen more (clinical) depression among them than in wider society. I suspect that to some extent survival as a human depends on being profoundly mistaken about some aspects of the world -- the most intelligent just don't seem to be that good at being wrong.
Some examples (admittedly, not people I've *met*): Turing (he of the machine) was widely liked (or loved -- non-sexually), was a great uncle, very practical, and an excellent athlete; as I understand it von Neumann had many friends and could not sanely be described as impractical; Wittgenstein (my favourite genius) had many admirers and was much liked (unfortunately poor Ludwig himself wasn't among them), and he was working on jet engine design before he switched to philosophy (I'll grant that he was a famously crap teacher...).
Another datum: here in the UK it isn't seen as good to comment on the height of one's own intelligence. It is also not seen as good to comment on how rich you are. The stereotype of the LOMBARD ("Lots Of Money, But A Right Dick") is well-entrenched (and actual Lombards are not rare). There doesn't seem to be a parallel concept for intelligence (or many people exemplifying it) -- presumably because the highly intelligent nearly always have the people skills to recognise and obey the social prohibition.
I suspect that the IQ (or whatever) that people have gives them better ability to do *what they want to do*. If, occasionally, someone wants to be a driven, hugely rich, near-sociopath, then intelligence will aid that too.
I present a hypothesis: that intelligence (or rather returns-on-increments-in-intelligence) is not intrinsically asymptotically limited. Rather, I suspect that the main value(s) of intelligence is effectively navigating the human world. Just as it's an advantage to be a couple of inches taller than most people, it's an advantage to be a bit more intelligent than most people. Just as there's a limit to how tall you can be *and gain more advantage than the costs* (about 6' 3" here in the UK), there's a point where being more intelligent than your peers is essentially pointless (I have no idea what the actual *costs* of being still more intelligent would be -- that's a weakness in my position). If everybody had IQ 200 (in today's IQ points), then the people with IQ 230 (say) would benefit. If the average were IQ 2,000, then the folk (or machines) with IQ 2,300 would be best able to arrange their worlds to their liking.
So, no asymptotic limit, just a cut-off on what's actually valuable in the here-and-now.
We may be (I suspect we are) in the middle of an arms-race (played out over the last few million years and projecting forward to extinction); we *might* have reached some sort of limit-point, but I see no strong evidence of that. We *may* be adding another set of players to the pool; those players *may* be better at whatever-they-want-the-game-to-be than we are (they certainly will be *if* we can come up with a couple of significant advances in AI -- more significant than LLMs, which I see as, at most, a small component of this hypothetical AI, perhaps contributing to the UI).
> "Rather, I suspect that the main value(s) of intelligence is effectively navigating to the human world."
I'll sign off on that!
> Can someone steelman this assumption?
It seems fairly obvious to me that there are threshold effects. But set aside any logistic saturation - just look at empirical effects in the world, and you'll see there are outsize returns to higher IQ / ability in general.
Largely, most economic growth, company creation, patents, and technological progress comes from the top 10-20% of people in a given nation, and it gets more concentrated the more you go up. Arguably, something like 60-80% of “progress” is driven by the top 10% or better.
Ivy leaguers are only 0.5% of the people in the US - yet 20/21 presidents in the last 100 years have been Ivy leaguers. 100% of Supreme Court Justices. 41% of Senators, and 20% of House representatives. 50-60% of federal appellate judges, and 30-50% of state Governors and Cabinet members.
But it’s not just Ivy people!
60-70% of patent authors/holders have a graduate degree (usually in STEM fields), and STEM degree holders are 5-10x more likely to hold patents than non-STEM degree holders. PhDs file 5x more patents per capita than bachelor's degree holders.
Yet the percent of the US that has a graduate STEM degree is only 4-5%.
If you look at the unicorns of the last couple decades, the founders are generally Ivy educated, and from wealthy and connected families. Since just the Ivy league is “0.5% or better,” you can see the rough degree of concentration.
In fact, in general, if you look at Rasch-normalized IQ scores versus problem difficulty, solving complex problems gets exponentially more difficult the harder the problem, and you need to go further and further out on the IQ and ability curve to even have a chance of finding a solution.
https://imgur.com/a/LRx5J7u
“This means that for the hardest problems, ones that no one has ever solved, the ones that advance civilization, the highest-ability people, the top 1% of 1% are irreplaceable, no one else has a shot. It also means that populations with lower means, even if very numerous, will have super-exponentially less likelihood of solving such questions.”
https://substack.com/@enonh/p-149185059
We can see a similar trend in normalized IQ versus probability of inventing something, and even this likely underestimates it:
https://imgur.com/a/P8BgxDg
Post about this:
https://substack.com/inbox/post/156565032
Hence, progress being driven most by the extremes of the bell curve in ability and IQ and background.
These have all been <<1%-tier people so far. Now extend this out! Sure, this is the tippy top, but think of ANYONE you know who’s filed a patent or started a company or small business, or done something that impacted a lot of people positively. Odds are, they are smarter, more conscientious, more educated, and from wealthier backgrounds than average - and not just by a little, but by so much they’re likely in the top 5-10%.
Overall, you can see this top 5-10% of people are punching FAR above their weight when it comes to economic growth, company creation, patents, and technological progress, and in fact, this tiny slice of humanity is likely driving the overwhelming majority of those things.
The above are from a post I made on high human capital fertility, you can read the whole thing and see the images and links in situ here:
https://performativebafflement.substack.com/p/high-human-capital-fertility-interventions
It's not just about sample size, it's about genes. And to get better genes, you need smart people to have more kids in order to have more chances of producing something exceptional. We haven't reached the peak of human evolution yet.
I'm somewhat (but not totally) skeptical of IQ as a general concept, and I think this illustrates a bit of why:
" An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities" (very present oriented, limited ability to really consider other people's experiences, etc) relative to an IQ 100 person. "
I remember reading someone--I think it was Nassim Taleb--arguing that most of the spread of human IQ scores was actually caused by pathologies that worsened people's cognitive function, or something like that. And while I suspect he may have overstated the case (shock!), it seems likely to be a significant factor.
Consider, for example, who you would expect to score better on an IQ test: a "naturally" IQ 80 person or a "naturally" IQ 100 person with a severe headache. How about for charisma? I'm pretty damn sure I'm less personable when I have a headache. Now consider how many physiological maladies might be less obvious and apparent than a headache, but still broadly impact somebody's ability to perform cognitive tasks[1].
If this sort of thing does play a significant part in determining IQ scores, it would naturally explain a lot of the diminishing returns: the differences at the top end of the scale would largely be differences between "little impairment" and "very, very little impairment." Which might be important when doing very subtle, tricky, sustained bits of thinking like math and science problems, but aren't going to look very different in most other areas of life.
[1] This has been in my thoughts a lot lately, as I've
A. recently had some intermittent health issues that effectively seem to make me dumber when they flare up
and
B. noticed a modest amount of evidence that I might have had these issues for many years, but that they were mostly too subtle to notice (while still being somewhat impairing).
> "Consider, for example, who you would expect to score better on an IQ test: a "naturally" IQ 80 person and a "naturally" IQ 100 person with a severe headache. "
Still the person with the IQ 100 (and especially the person with the IQ 120, or 140), assuming the headache isn't literally physically debilitating.
But more relevantly, I'd generally expect the IQ 100 people to be much better at usefully updating their priors about *anything* under the stress of pain, while the IQ 80 (and especially IQ sub-80) people often don't have the capacity to do that even when they are in the peak of health.
>"[1] This has been in my thoughts a lot lately, as I've
> "A. recently had some intermittent health issues that effectively seem to make me dumber when they flare up "
How many IQ 80 people (for lack of a better term) do you know very well, having interacted with and observed them for a long time?
I'm guessing not many, because you're understandably assuming that an IQ 80 (or lower) person is just a more extreme version of a smart person like you being less-smart under stress. I don't blame you, that's how I used to model low IQ, too, at least until I spent a LOT of time working with IQ 80 (and perhaps below) people and observed over *years* that there are intellectual tools around observation and self-reflection that they simply didn't have and which couldn't be taught.
I've been sitting on a draft of an essay expanding on the comment I made here (https://www.astralcodexten.com/p/open-thread-314/comment/49094023), but it's been a mighty struggle to find a way to explain to people smarter than me that *NO, REALLY*, there are people who are so stupid that smart people can't even model their mental state.
"How many IQ 80 people (for lack of a better term) do you know very well, having interacted with and observed them for a long time? "
The concise and correct answer is "I have no idea, since I don't go around handing out IQ tests." The only person whose IQ I can reasonably claim to know is my own, and in practice I'd have to look up a conversion from an SAT score.
However, while I wasn't thinking about it when I wrote my original reply, I actually have quite a lot of relevant background. I worked for many years as an educator, putting in well over 10,000 hours in some mix of tutoring, TAing and teaching very small classes. If I pared that down to just contact hours, and further pared it down to just hours when I was offering direct help to an individual student[1], I expect the total would still likely exceed 10,000.
Permitting myself some unprincipled guesswork, however, I'd estimate the answer to your question to be "very likely more than 0, though probably rather less than 10." Without particular effort I can recall names, faces and general dispositions of perhaps five people who met all the following criteria:
1. Appeared to me to have a very difficult time with academics generally and with mathematics (what I was most often teaching) in particular.
2. Had been identified by some outside authority as someone needing fairly intensive and long-term help to progress academically. Usually but not always this meant they had an IEP. Usually but not always they were high-school age.
3. Worked with me often enough and closely enough for me to get a good sense of how quickly they learned things, how well they retained things, how these varied between a typical day, a good day and a bad day and how common each of those three were.
A few observations:
A. For most such students, the difference in progress between a good day and a bad day was extremely pronounced.
B. In some (but not all) cases, there were one or more readily apparent reasons why some days saw less progress than others: for example, one student had regular trouble sleeping, to the point that they would often fall asleep in front of me. "Fall asleep in front of me" days unsurprisingly involved much less progress than "reasonably alert" days.
C. I also worked with a number of quite talented students, some of whom displayed similar tendencies to what I describe in A and B. In fact, I recall one very talented student with very similar sleep troubles.
D. If you compared good-day to good-day or bad-day to bad-day, the mathematically talented students would certainly outperform the struggling students (obviously). But I would guesstimate that a bad day for one of the talented students would tend to see around as much progress as a modestly-above-average day for one of the struggling students.
My sense is that we have some STRONGLY clashing intuitions on some combination of "what IQ 80 means in practice" and "what determines someone's performance on an IQ test."
First, IQ 80 is (to my understanding) low but not abysmally low. It's 1.33 SD below the defined mean, which means (if the distribution is properly normalized to the population) around 9% of people have IQ that low or lower. This means that anyone who isn't a hermit and doesn't live in a bubble that's strongly filtered for IQ[2] should know multiple people of around that level. To my understanding, the threshold to be considered to have an "intellectual disability" is IQ 70[3], which 80 is well above. I'll note that I was NOT trained or qualified to work with people with intellectual disabilities, and would never have been put in a position to. So while I'm not comfortable making specific guesses or estimates about any student I worked with, I am comfortable assuming that they were all cleanly above this threshold. But also Scott discusses here how our perception of what people with lower IQ scores are like is *heavily* distorted by their correlation with other sorts of disabilities[4], which don't necessarily hold as cleanly as we expect:
https://www.astralcodexten.com/p/how-to-stop-worrying-and-learn-to
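As a quick sanity check of that ~9% figure - assuming the standard IQ scale (mean 100, SD 15), which is what the 1.33 SD calculation above presumes:

```python
from scipy.stats import norm

# IQ 80 on the standard scale (mean 100, SD 15) is (80 - 100) / 15 ≈ -1.33 SD.
share_at_or_below_80 = norm.cdf((80 - 100) / 15)
print(f"{share_at_or_below_80:.1%}")  # ~9.1%, matching the "around 9%" claim
```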
Second, your life outcomes aren't usually going to care whether you had a bad day when you took an IQ test. But they care *quite a lot* about how well you think and learn on average. I talked above about the day-by-day *progress* of students at different ability levels: and while there could be overlap in the individual days of students at quite different levels, the cumulative effects painted a quite different picture. Even over the course of a few months, the difference in how much one student learned vs another student could be immense. So when you say you think an IQ 100 person[5] would outperform an IQ 80 person even in spite of a headache, I find that VERY hard to believe. Maybe I'm just more susceptible than most, but moderate-to-severe pain (well short of "physically debilitating") sure does hamper my ability to think clearly. It doesn't make me forget things I've already learned well (I expect I'd get all the "gimme" questions on a test nearly as well), but if I'm in significant pain and find myself needing to reason through a novel problem, the FIRST question I will ask myself is "can this wait until I'm not hurting?" My understanding of IQ tests--and maybe I'm off base here--is that they're supposed to depend as little as practical on specific accumulated knowledge or the practice of specific skills.
[1] Which is to say, all one-on-one tutoring, and those portions of teaching and TAing in which I was answering direct questions or providing in-depth guidance to an individual.
[2] Which, to be fair, I think many SSC readers do. I'm pretty sure *I* currently do. I just haven't always, and have a lot of experience outside it.
[3] With the shape of the normal distribution meaning that only a fairly small minority of even the sub-80 people fall below this threshold.
[4] With the obvious and oft-repeated alternate interpretation being that Lynn was just a garbage researcher who took garbage measurements. But that should still significantly raise our skepticism that measurements at the bottom end of the distribution are as useful or intuitive as we believe in general.
[5] Note: perfectly average! Probably not great at any of the skills being tested. With regard to math particularly, I think I also have a pretty clear view of how good the average person isn't.
Okay, so I think I owe you an apology.
When I said, "IQ 80 people (for lack of a better term)" that "better term" I was lacking was a politically correct and/or polite way to say "stupid."
That's why I had the parenthetical there, to signal that this was not really a discussion about the objective validity of IQ tests per se, but rather a discussion about the kind of people who score low on IQ tests and, more importantly, all other tests. I did get around to using the word "stupid" in my final sentence, but clearly by that point, referencing IQ at all was a tremendous distraction.
When I say "stupid" or "low IQ," I'm thinking about a former coworker who deliberately smoked and drank while she was pregnant and gave birth to a child with fetal alcohol syndrome, and who couldn't comprehend the difference between a mortgage interest schedule and compounding interest on a credit card. I'm talking about a different coworker who simply *could not be made to understand* the difference between a health insurance premium, a co-pay, and a deductible, not even with a written guide in front of him and a patient, point-by-point explanation of each term (he ending up saying, "none of this is fair, I'm not paying any of this bullshit anymore, fuck them!"). I'm talking about a third coworker who very literally couldn't problem-solve through *any* minor deviations to his routine, not because he was frozen with anxiety or whatever, but because the ideas for how to solve minor problems simply didn't rise to consciousness. Whether it was a customer asking him a routine question about a policy, or the coffee-maker clogging and overflowing, or deeply cutting into his hand splitting a bagel, he never knew *what to do.* I learned to get between him and customers to answer questions, and to give him very specific instructions in small batches for everything else.
You and I could do our taxes or job on-boarding paperwork or answer essay test hypothetical questions, even with a very bad headache. These three? Not without help, no matter their level of health.
That third person was the first profoundly "low-IQ" (stupid) person I ever got to know very well. This isn't that surprising, to quote myself from the link I provided:
> "It's not their fault; the people writing comments here are almost universally living in highly intelligent "social bubbles" (https://slatestarcodex.com/2017/10/02/different-worlds/). They tend to have highly intelligent family, seek highly intelligent friends, and end up in careers which expose them to highly intelligent peers. They might not *consider* themselves to be highly intelligent - because they tend to socialize with highly intelligent people, they know people who are even smarter than they are - but nevertheless, they're highly intelligent and everyone they know pretty well is highly intelligent, and thus they instinctively model the minds of *everyone* from this perspective.
> "They can't *fully* model what it's like to be truly stupid; unobservant, incapable of dispassionate self-reflection, unable to accurately predict the consequences of a given action, unable to absorb information, unable to update priors. They can't model what it's like to have an entirely different set of motivational priorities based on an *inability* to think, and that's why so many of their suggestions about how to help and/or manage stupid criminals are ultimately unsuccessful."
I wrote those two paragraphs because they exactly described my experience and what I've observed in those like me. I was so deep in the "intelligent world" that it took me six months of working with Third Guy full time before I finally understood that he wasn't willfully discarding good ideas to aggravate me, *he wasn't having them.* It took me another six months to stop resenting his need for constant supervision and protection.
I realize that can sometimes sound implausible to people whose "intelligence worlds" are far more closed than mine is. They either don't really believe in their heart-of-hearts that genuinely stupid people exist, or they only understand it on an abstract, surface level, the way people abstractly understand that foreign cultures are very different from their own but don't actually KNOW that until they travel to and spend time in one.
Hm. Maybe the foreign culture travel metaphor can be useful here. Gotta think more on that.
This whole post deserves a longer response[1], and I intend to write one. But I couldn't let this pass without comment.
"You and I could do our taxes or job on-boarding paperwork or answer essay test hypothetical questions, even with a very bad headache."
Wow. This...this is a sentence for sure. I think this is emblematic of absolutely everything that is wrong with the worldview that is proudly on display here. So let me say this with complete clarity.
No. No I could not. No I could not do my taxes with a very bad headache. I could not do my taxes with a moderate headache. Whether or not I could do my taxes with a mild headache would depend quite a bit on the circumstances. For that matter, so would my ability to do my taxes with no headache at all. And none of that has really any bearing at all on my ability to perform on things that are broadly similar to an IQ test.
I'm sorry that you've had bad experiences with your coworkers and that those have (apparently) made you jaded. But your view of human psychology is grossly and enormously oversimplified, as (I suspect) are your guesses about the social lives of others. There are more things in heaven and Earth, Christina, than are dreamed of in your philosophy, including many, many human minds that *do not* fit the narrow schema that you've tried to define for them.
Doubtless there are some high-IQ-test-scoring people who live hermit-like existences away from other humans, and know few people in general. Doubtless there are others who have managed to keep themselves siloed away and interact only with people very similar to themselves. But the world is quite a large place, and I tell you quite honestly there are loads and loads of people who BOTH score highly on standardized tests of various stripes AND have some combination of less-privileged backgrounds, breadth of experience and intellectual curiosity that ensures that YES THEY DO meet wide cross-sections of humanity. Probably a sound majority of my close friends and family would fall into that category: whatever your bad experiences with other humans, I expect some of them have had worse. Whatever your IQ score, probably some of them have or would score higher. Nor do I have any reason to believe they're especially unique--I run across media suggesting similar combinations of academic aptitude and worldly knowledge quite frequently.
And I tell you frankly, none of them would have a high opinion of what you've written here. Not one.
[1] Oops, I guess that ended up pretty long for what was supposed to be a quick aside. But it still didn't really touch the main points I wanted to make.
Don't bother writing a reply.
I can see that you very obviously haven't had multiple, long-term relationships with the kind of low functioning / barely functioning people that I'm talking about, who can't do stuff LIKE, FOR EXAMPLE, THIS IS NOT A COMPREHENSIVE LIST, NOR DOES IT REFLECT THE LACK OF ABILITIES OF A SINGLE PERSON, American taxes, job on-boarding paperwork, complete an essay in response to a hypothetical question, comprehend their obligations under American healthcare, know all the steps to take when they get a deep cut in their hand at work (wash the wound, put pressure on it, check the wound after a time, understand that if it doesn't stop bleeding, it requires a trip to urgent care / ER, call the supervisor to tell them you're leaving mid-shift and they'll have to find emergency coverage, etc).
I can see that because you said it, gave examples of how some people performed in math tutoring sessions with you (!!!), and because you are also apparently focused on the edge cases of smart people like you, who can write comments like these and tutor others in math but would somehow be incapable of doing your taxes with or without a headache.
(For what it's worth, the three people I wrote about above and a fourth I'm thinking about now would not be interested in reading the exchange we're having, or pretty much any other on ACX. If they were forced to read it (and one would not be able to), they would not be able to pass a reading comprehension test on the discussion with questions like summarizing our respective positions and then quoting sentences we wrote to make supported rational speculations about our respective backgrounds. If it were read aloud to them, they wouldn't be able to follow.)
Please read the Different Worlds essay on SSC and realize you are privileged to be in one, my friend. Those friends and family in your social bubble who would take a dim view of my observation that there are stupid people out there who are so stupid that smart people very literally can't model that stupidity (and thus, don't really believe they really exist) are likewise in a social bubble free of stupid people.
You don't get it.
And that's okay! I didn't either until I started working with them, and I freely admit my social bubble made me so naive that it took me months of observation before I started to understand that there are people out there who are meaningfully not like me - or you, for that matter.
https://hereticalinsights.substack.com/p/iq-and-talebs-wild-ride
The observation that Taleb is a bombast who frequently gets out over his skis and wildly overstates any point he is trying to make is hardly novel, and not one that really needed to be made at such length. It gets quite boring and repetitive after a while; I ultimately stopped reading perhaps a little over halfway through, as it seemed a waste of time to continue.
As to the object-level question I was discussing, this post seemed to touch on it only lightly and only so far as needed for the author to talk more shit about Taleb. Meanwhile, the degree to which the author apparently *doesn't even notice* the underlying issues that would feed into that point was pretty irksome.
As a more general matter of courtesy, I think you should consider that replying with nothing but a link tends to suggest that the link is highly and directly relevant to the issue being discussed (which is not the case here). If you want to call attention to specific parts of a longer post, you can mention which ones and where to find them in the comment. Likewise if you have your own thoughts or responses, by all means type those as well. But there are many times more things to read on the internet than any one person could hope to digest, so dealing solely in long, tangentially-relevant links is not very respectful of the time of others.
>I remember reading someone--I think it was Nassim Taleb--arguing that most of the spread of human IQ scores was actually caused by pathologies that worsened people's cognitive function, or something like that. And while I suspect he may have overstated the case (shock!), it seems likely to be a significant factor.
That doesn't seem to mostly be the case. While syndromic retardation exists and is distinct from familial retardation (see here on the distinction:
https://www.cremieux.xyz/i/153828779/countries-cant-have-mean-iqs-in-the-s
), cognitive benefits are quite apparent while moving rightwards on the intelligence distribution, even independent of syndromic shortcomings.
>If this sort of thing does play a significant part in determining IQ scores, it would naturally explain a lot of the diminishing returns
There seems to be little evidence of diminishing returns to intelligence overall; see: https://www.cremieux.xyz/i/100782605/nonlinearities-in-the-relationship-between-iq-and-income.
And this section of the article: https://hereticalinsights.substack.com/i/140130396/nonlinearity-and-decoupling-a-problem-that-is-not, e.g.:
>Brown et al. (2021) used data from four longitudinal cohort studies with 48,558 participants in the United Kingdom and United States from 1957 to the present for the relationship between cognitive ability measured during youth and occupational, educational, health, and social outcomes later in life, and found that most effects followed a linear trend.
Indeed, if anything, the opposite is often the case, with growing returns to intelligence.
See: https://humanvarieties.org/2016/01/31/iq-and-permanent-income-sizing-up-the-iq-paradox/ which notes the benefits of a log-income/IQ model, in which each additional IQ point corresponds to a percentage increase in income, which would of course grow in absolute terms at higher IQ levels.
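To make that log-linear shape concrete, here is a toy illustration (the base income and percentage-per-point figures are purely hypothetical numbers of my own, chosen only to show the shape, not estimates from the linked posts):

```python
# Toy log-income model (hypothetical parameters): each IQ point multiplies
# income by a constant factor, so percentage returns are flat but absolute
# dollar returns grow as you move up the scale.
BASE_INCOME = 50_000    # hypothetical income at IQ 100
PCT_PER_POINT = 0.02    # hypothetical 2% income gain per IQ point

def income(iq: float) -> float:
    return BASE_INCOME * (1 + PCT_PER_POINT) ** (iq - 100)

for iq in (100, 110, 120, 130, 140):
    gain = income(iq + 10) - income(iq)
    print(f"IQ {iq}: ${income(iq):>10,.0f}   next 10 points worth +${gain:,.0f}")
# The dollar value of the next 10 points keeps rising: "growing returns"
# in absolute terms even though the per-point percentage never changes.
```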
https://sci-hub.st/https://journals.sagepub.com/doi/10.1177/1745691620964122
>I also don't see very many highly-charismatic super-smart people.
https://xcancel.com/cremieuxrecueil/status/1677791286848356353#m
>An IQ 80 person isn't going to be very persuasive or capable, and is likely to have lots of other "co-morbidities"
https://www.cremieux.xyz/i/153828779/countries-cant-have-mean-iqs-in-the-s
>Are chess geniuses (all very high IQ)
https://www.astralcodexten.com/i/135293928/do-the-assumptions-that-make-intelligence-a-coherent-concept-hold
>many of both sets tend to be cranks and pretty incapable outside of their narrow specialty.
https://en.wikipedia.org/wiki/New_chronology_(Fomenko)#Reception
https://www.azquotes.com/quote/615912
https://en.wikipedia.org/wiki/Bobby_Fischer#Personal_life
There's IQ, and there's all sorts of other qualities that affect, and are affected by, IQ to give you general effectiveness. The easiest example is working memory. Imagine if you had a 100 IQ, but a working memory that could hold 256 concepts as easily as you hold 7±1 today. I think that'd leave a 140 IQ person in the dust.
Now that I write that, it might be just as impossible for that size of working memory to occur in a human brain (diminishing returns again) as it would be for a 200 IQ.
However, computers have exceeded that limit since the beginning. And yet. Here we are. Still not overrun by robots.
I don’t think you can measure IQ like that. It’s a mapping to a standard deviation. Beyond 160 it kinda breaks down. ChatGPT assures me the smartest person out of 8 billion would max out at 193 but that’s not measurable anyway.
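That figure is easy to sanity-check under a strictly normal model; a minimal stdlib sketch (and the normal assumption is exactly the part that breaks down in the far tails, which is the point above):

```python
# Under a strict normal model (mean 100, SD 15), the single most extreme
# person out of 8 billion sits at roughly the 1 - 1/8e9 quantile.
from statistics import NormalDist

population = 8_000_000_000
z = NormalDist().inv_cdf(1 - 1 / population)  # ~6.33 standard deviations
print(f"z = {z:.2f}, IQ = {100 + 15 * z:.0f}")  # ~195, same ballpark as 193
```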
To your point, it's interesting that neuron counts for large animals with big brains (elephants and whales) seem to show that their brains are a lot less dense than ours. Meanwhile our brains aren't very neuron-dense compared to, say, a crow. Which seems to imply that there's an architectural limit of some sort which prevents large, neuron-dense brains. My suspicion is that we're out on the bleeding edge of that envelope (we have large brains that are also more neuron-dense than they should be) and that this is part of the reason why our minds are so unstable and prone to weird failure modes.
Birds need to have compact light brains, because they have to be able to fly. Some of them grow parts of their brain in the season when they have to sing, and let them atrophy after.
Elephants probably don't care how big their brain is.
We have space limits and we also need a lot of compute.
But that begs the question as to why our brains aren't denser, because if our brains were as dense as corvid brains then the space issue goes away. My point is that there are case-by-case explanations, but the overall trend seems to be that you can have small, dense brains or big, diffuse ones but not both. And the result seems to be that there's an absolute number of neurons in a given brain that it's hard to exceed. I'm not putting this forward as some sort of general hypothesis, mind, just an observation that seems to gel with the OP's point.
I've heard some people say that chess super-GMs aren't necessarily that high in IQ. Now, they clearly have some sort of outstanding spatial intelligence/ability to understand a sequence of moves. I've never taken chess all that seriously, but for me it's incredibly difficult to visualize something a whole bunch of moves in advance, even if the notation is listed. When I see Hikaru or someone on that level spit out 10 moves in a row, I'm completely astounded; but if I spent a ton of time studying chess, I'd expect my abilities in that area to improve. Also, I know there have been studies of chess players and their ability to memorize the positions of pieces on the board, and they do much better when presented with realistic positions and not just randomly scattered pieces. They're clearly learning how to chunk the pieces into units and memorizing those units. I think chess ability is not quite as correlated with IQ as you might think.
I'm sure there are a fair number of chess prodigies who have demonstrated great accomplishments in other areas, but some of them, just like Nobel winners, have been cranks. Speaking of the Nobel, didn't we just have a Nobel disease discussion in one of the other open threads? Some people, for lack of a better word, are a little bit crazy, no matter how intelligent.
But I think you are somewhat underrating the intelligence of politicians. A lot of them were Rhodes scholars, valedictorians, etc. For example, even people who hate Ted Cruz almost always agree that he's brilliant. I'd wager that most of the 535 reps and senators have an IQ over 120, maybe even 130.
Also, I'd dispute the idea that 120 is "genius" level. That's only about one and a third standard deviations over the mean. Something like 5% of the population is over 125, and I wouldn't say everyone in the top 5% is a genius.
Now, I think you're right that the effects of IQ mostly level off at a certain point, and other factors play more of a role in success. But I think that at least up to 130, maybe even up to 140 or so, there are still pretty considerable gains to be had. I've taught high school for 15 years, and there's usually a pretty noticeable difference between the kid that gets a 35 on the ACT (99th percentile) and someone down around 30-31 (roughly 95th percentile). I'm willing to bet that on average, the life outcomes of the kids getting a 35 are quite a bit better than the kids getting a 30.
Sure. There are differences. But each 10 points is a *reduced* effect compared to the last 10 points. And that's the critical bit--you can be as super-smart as you want, but if the returns asymptote to 10% better...and especially don't generalize to all areas...
And I'd strongly push back against the idea that politicians are genius level. Smart (as in above 100), probably. But mostly they're just *polished*. And that doesn't actually take much smarts, just practice and preparation.
> IQ, actual IQ, tends to generalize very very well -- to HARD problems. It is notably weak on actual intelligence tests (my friend the genius used "mind reading" on one segment (aka anticipating what the tester would say before she said it), because he was so bad at the actual component being tested).
People are using IQ in a very confusing form here. What's measured in tests is IQ. What that's a proxy for is most often called g (i.e. general intelligence).
Ted Cruz graduated from Princeton and Harvard Law where he edited the Harvard Law Review. Base rate analysis suggests that his IQ is very high, whether or not you agree with his politics (and I do not).
It generalizes to *intellectually-accessible* hard problems. Not all hard problems are, in my experience, suitable to solution via thinking really hard. Most interpersonal problems aren't--in fact, thinking too hard can actively be a detriment in those.
Most of us can't multiply 5-digit numbers in our heads. Some of us can do it stepwise, using a learned algorithm. Most LLMs cannot reliably even add 5-digit numbers but can maybe pass the bar exam. A $5 chip can add, subtract, multiply and divide 5-digit numbers in microseconds.
A hard problem is relative to something. High IQ individuals are statistically better at problems that are hard for humans. "Dumb" animals beat us at all kinds of stuff but have very limited capacity to generalize, eg spatial processing to intercept prey doesn't enable geometry or calculus.
Whether returns to intelligence are linear or not is not really relevant to ASI, but it is relevant to takeoff speeds. As long as AGI/ASI is possible and progressing relative to current tech trends, the returns to intelligence aren't relevant over historical timelines.
So, let's lay out a very basic scenario. Assume IQ works from a base 80 and scales logarithmically, so at IQ 80 we've got the equivalent of a dumb person, at IQ 800 we have something equivalent to the smartest person alive, and at IQ 8000 we have a low-level superintelligence. Let's also imagine there are no significant methodological improvements or anything, the IQ just advances in line with Moore's Law, doubling every two years. We basically just keep running the same models with more and more transistors and there are no feedback loops. And tomorrow OpenAI releases the world's dumbest proper AGI, at IQ 80.
So in 2029 the IQ of our dumb AGI is *320* and it's getting around average human level. In summer of 2032, it passes the smartest person ever level, and by September 2039 we have a true superhuman ASI.
And these are really conservative estimates about the returns to intelligence and it lacks any feedback loop, where Moore's Law doesn't speed up development when we have millions of AI agents as smart as our best minds working on better GPU units or something.
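A minimal sketch of the arithmetic in that scenario (the 80/800/8000 scale and the two-year doubling are the toy assumptions above, not forecasts):

```python
# Toy timeline: capability score starts at 80 and doubles every two years.
import math

def years_to_reach(target, start=80.0, doubling_years=2.0):
    """Years until `start`, doubling every `doubling_years`, reaches `target`."""
    return doubling_years * math.log2(target / start)

for label, score in [("roughly average human (320)", 320),
                     ("smartest person ever (800)", 800),
                     ("low-level superintelligence (8000)", 8000)]:
    print(f"{label}: ~{years_to_reach(score):.1f} years")
# ~4.0, ~6.6 and ~13.3 years: i.e. 2029, mid-2032 and 2039 from a late-2025 start.
```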
As long as AGI/ASI can grow relatively in line with software/computer growth/improvement, that will overpower any low returns to IQ just through compounding improvements in (historically) short timeframes.
Except asymptotics are asymptotic. In this model, you can't *ever* get above X% higher *no matter how much effort you throw at it*. Logistic is not logarithmic--in a standard scaled logistic curve you can't get above 1 (arbitrary units), you can just get arbitrarily close.
And I see no evidence anywhere that self-improvement via ai is actually meaningfully possible.
Oh god bless, reading is hard. My bad.
I think the standard Bostrom answer is that since we don't see declining returns to IQ in subhuman intelligences, like bug->rat->cat->monkey->human, why would we assume some ceiling right around human IQ.
Yeah, if you look at the left side of a logistic curve, you see increasing returns. That's the whole point. Going from (conceptual) 10 -> 20 -> 30 shows improved returns. But logistic curves[1] always turn over. And my experience has been that there are already diminishing returns at "human scale" just going from normal -> smart -> really smart.
[1] The number of increasing faster-than-linear, positive feedback-loop processes in nature is really small and carefully constrained. I see no reason to believe that intelligence is one of those. Most of them are logistic instead. So the default assumption is that it's logistic.
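To illustrate the shape in question, here is a toy logistic curve (arbitrary units, nothing empirical): per-step gains rise on the left side, then turn over and shrink, and the curve never exceeds its ceiling.

```python
# Toy logistic curve: gains per step increase up to the midpoint, then shrink,
# and f(x) stays below the ceiling no matter how large x gets.
import math

def logistic(x, ceiling=1.0, midpoint=0.0, steepness=1.0):
    return ceiling / (1 + math.exp(-steepness * (x - midpoint)))

prev = logistic(-6)
for x in range(-5, 7):
    cur = logistic(x)
    print(f"x={x:>2}: f(x)={cur:.4f}  gain this step={cur - prev:.4f}")
    prev = cur
# On the left the gains grow (looks like increasing returns); past the
# midpoint each step buys less, and the ceiling is never crossed.
```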
Your personal experience is on the wrong scale because it's on a human scale. The dumbest human is still smarter than 99.999999999999% of all living beings in earth history. The median IQ on a scale of all living entities ever isn't a dumb person, it's like a frog or something. We do not begin to see declining returns to intelligence as we go from frog to lizard to cow. Your observations about declining returns to intelligence are focused on the extreme right end of the distribution.
No one is going to look at human intelligence, which in the evolutionarily trivial period of 100k years has become so dominant that we've become an extinction event for other species on par with the dino meteor, and go "Yes, we have clearly reached diminishing returns in intelligence." If you keep it to human scale, though, I will concur: I often see minimal personal returns to IQs over like 130. That's real. But the right scale is all intelligence, not just human intelligence. And a graph of all intelligence is not a logistic curve; it's an exponential curve at exactly human intelligence that collapses at exactly "about as smart as a human lawyer" and would look absurd if you actually drew it.
I'm skeptical about logistic being the default. I'd certainly concede that when there are finite resources at play, conditional on a fixed level of tooling/technology/effort/etc.
But the purported logistic curve for oil extraction got blown out of the water when we discovered fracking. I'm sure there's still *A* logistic curve under there somewhere, but it looks nothing like the one we imagined.
Same spirit, there might be resource limits on intelligence, and probably are on wetware like brains. But I wouldn't take a priori that those limits are the same for silicon.
In reality there's a limit in compute that is being reached as we speak. Therefore gains have to come from elsewhere.
Transistors are nearly as small as they can get, training frontier models already consumes gigawatt-hours, and doubling that every two years would demand something like the output of a national grid. The cost of new fabs runs into the tens of billions, so the money is as much of a constraint as the physics or the power. Moore’s law was steady while it lasted; what we face now are plateaus, where real progress depends on smarter algorithms and efficiency rather than brute-force silicon or electricity.
(For context, GPT-4-class models are thought to have used on the order of tens of gigawatt-hours to train. If that demand kept doubling every couple of years, by the late 2030s you'd be talking a terawatt-hour for a single training run, the kind of consumption that starts to match the annual electricity usage of a small country.)
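As a rough doubling-time sketch (the ~10 GWh starting point and the steady two-year doubling are both assumptions, and real training-compute growth has historically been faster):

```python
# Extrapolating training energy under an assumed two-year doubling.
energy_gwh = 10.0          # assumed rough GPT-4-class training energy
year = 2024
while energy_gwh < 1_000:  # 1 TWh = 1,000 GWh
    year += 2
    energy_gwh *= 2
print(f"~{energy_gwh:,.0f} GWh (~{energy_gwh / 1_000:.1f} TWh) by {year}")
# Hits the terawatt-hour scale around 2038 on these assumptions; faster
# growth in training compute would pull that date much earlier.
```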
WoolyAI’s answer seems to better address your question than mine. If you feel like the standard Bostrom answer fails because we’ve reached some asymptote, I can’t prove you out of it. But I can ask you to imagine a similar conversation between chimps about whether or not there can be an intelligence greater than theirs. When the idealist notes that chimps are smarter than bugs, the cynic might say “if you look at the left side of a logistic curve, you will see increasing returns”. They might observe that the smartest chimp can make sentences only slightly better than the average chimp. But they’d be wrong about a logistic pattern to intelligence. Or at least, the logistic curve that applies on the level of species is very different than the one that applies to a single species.
It still seems hard to see why, as a fact of nature, the curve for intelligence would happen to flatten out right at human level. It certainly isn’t hard to *imagine* how useful it would be to be even as smart as ten von Neumanns working together in a room, and no particular reason to imagine even *that* is the upper limit.
If you perceive a flattening in human intelligence, it might be because we have evolved for group behavior, and isolated far-right-tail individuals are hampered by the paucity of peers to collaborate with. Or that intelligence is a recently-evolved attribute of humans that is still associated with various other cruft in the genome (like lack of charisma?) that interferes with what the individual can accomplish. Or that our monkey-nature makes sure that when a peg gets too high it’s pounded down.
It might be that AIs created by training on human text/behaviors can’t ever exceed previously displayed human abilities. But again, it’s hard to imagine what kind of law of nature would guarantee that. A collection of only slightly above human-average AIs might be free of Dunbar’s limit, for instance, and able to cooperate directly with more peers, accomplishing more than a similar collection of human AI researchers could.
All of these answers seem to be arguments from incredulity. "Hard to imagine" isn't a convincing argument.
And the idea that "all life" is the right scale just seems to be assertion, rather than argument. It assumes that intelligence can be meaningfully generalized as a single scale running over very disparate creatures, which is a massive smuggled assumption/stolen base.
120 IQ is squarely in the range of midwits. The sort of people who don't use Quantum Effects in their protein folding calculations. For twenty years! That's twenty years of research down the drain, and this is not exactly a sign of "actually high intelligence." Many years of stupidity, of PhDs showing a complete and utter lack of understanding that "maybe we're doing something wrong"?
Chess is another midwit sport. They do not have very high IQ, as they are subject to trolling, in general, by people less good at chess but smarter than they are.
There are absolutely tons of highly charismatic "super smart" people. You don't know about them because they are either comedians (you can't tell me the good comedian isn't a shmart guy, because I know he is), or too busy doing ten different jobs under ten different pennames.
Assume you lack the capability for metaintelligence. People more than about 10 IQ points higher than you simply look "absurdly lucky."
120 isn’t, by definition, midwit. Unless you have a very broad definition of mid. It’s about the equivalent of 6ft 1 1/2 in height.
> Midwits are "in general" smarter than 100.
Again by definition a mid intelligence would be around 100. I’m not certain you understand the nature of the IQ scale, it’s a relative scale.
I think you are making up your own definitions here. It’s not that common for people at 1.5 SD above the median to actually believe any of that.
That said, I do believe that there's a step change at the top level of intelligence - which isn't well captured by IQ, which assumes a bell-shaped curve - but that's hard to measure. By their fruits shall ye know them.
It’s not that radical to say we are all stupider than Von Neumann
Well indeed trusting the science is not itself a scientific idea, of course.
I'm still unsure who you think are the brightest people on the planet except for a few mediocre comedians and your friend who was joking about winning the Nobel prize for World of Warcraft.
Can you give an example of someone who is both highly charismatic and *by normal measures* super smart? Intelligent (ie over 100 IQ), sure. I'll buy that. But *genius level* (however you define that)?
And I think your causality is backward--you'd need to show that *all* (or *most*) geniuses are also super-charismatic, not that *some* are. What I'm questioning is that IQ *causally predicts* charisma. I believe that the two are actually uncorrelated above some relatively low threshold. Which would totally be fine with "there exist super smart, super charismatic people". But wouldn't be fine with "if AI gets super smart, it will therefore also be super charismatic".
Are you *sure* that those people are really smart? I see no evidence of IQ tests for them. I think you're letting yourself be biased by "is really good with words === is super smart". And that's exactly the thing under question here, so assuming that "good with words" === "super smart" is rather circular.
I'd be fully willing to accept that they're *above normal*. But genius level? That I doubt.
And you still haven't proven the real problem here, which is the reverse of this line of argumentation. That *all* high-intelligence people are *also, intrinsically* charismatic. And that I'm 100% sure is just flat false. Because I know plenty of people who have scored really high and done really great intellectual feats who *suck* at people-ing.
That feels like a bit of argument by definition. Sure, if you define "really intelligent" as "also really good at everything else" (rather than the normal methods of measuring/defining that), the problem resolves itself circularly. But that's not really a meaningful discussion.
I will also note that *thought experiments* are really easy to do even as a mid-wit. Actually proving it and doing the work (ie building the framework for general/special relativity) takes the smarts. So no, I'd not say that just doing that is winning a Nobel prize, even if a Nobel prize results. Or is evidence of being super smart in and of itself.
I think one crux is that we're used to assuming intelligence is more or less fixed. But in the context of AI, a mind is a piece of technology that can continue to be improved, and the rate of improvement can increase if you're throwing more intelligence at the problem.
I'll try. The steelman would be that genuinely, along some dimensions, we should think that logistic increases in intelligence can lead to exponential increases in outcome. It just depends on what you measure.
I’ll analogize to sports. With respect to tennis aptitude, there are strong diminishing returns in number of points. Federer, the world’s best, only won 52% of points he played. And with respect to speed, the diminishing returns are obvious: his shots aren’t much more than twice as fast as mine are. But with regards to outcome/money/prestige, that tiny edge translates to world championship, millions of dollars, and ultimate prestige.
If an AI with a “small” boost in IQ knows enough to make a virus slightly better than Covid, why does it matter that improvements are logistic when millions are dead anyway?
Those sorts of sports outcomes are *binary* and *rivalrous*. Only one person can be the best, and the label itself, regardless of the actual performance, is what matters. Most things aren't that way, and that's not really responsive to what I'm asking here.
I'm not arguing that AI couldn't be *devastating*. But people manage to do that just fine by themselves. As does *unthinking nature* (cf covid itself). Intelligence is not load bearing there.
What I *am* questioning is the idea that you can get superhuman intelligence in all aspects and have it be *categorically* different than what we have now, to the point that we won't understand it. I suspect that it will, instead, be *smart*, but *understandably so*. And suffer from all the foibles attendant to any smart human (or at least the analogues of them). Including having to specialize.
Originally, I wasn't going to comment on this thread, since you asked for dissenting opinions, whereas I made pretty much the same point as you a few weeks ago [0]. (Namely, that the ROI from intelligence has diminishing returns, depending on the environment.) (It's like a lock & key. The specificity of the key depends on the complexity of the lock. And the *value* of "unlocking the lock" depends on what the lock is guarding.)
Except, I do actually sorta disagree with:
> Including having to specialize.
Well, maybe not. The current advantage of LLMs is that they're able to read the entirety of the internet. I'm reminded of Dmitri's post about how LLMs should be regarded as SuperHistory instead of SuperIntelligence [1]. And there've been discussions about how "data is the new oil" because LLMs might one day run out of internet. (I can't remember links, will look though.) [EDIT: who cares because there's a million hits on DDG.]
[0] https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154925255?utm_source=activity_item#comment-154926438?utm_source=activity_item
[1] https://thedosagemakesitso.substack.com/p/its-not-superintelligence-its-superhistory
> To wit, I think most people have a pretty terrible understanding of what IQ means, even here. For one thing, it's an ordinal ranking. The notion of something like "linear returns" is a bit sketchy.
Thank you for writing this, so I didn't have to write the same thing. At best it is a metaphor, like "look at the differences in human intellectual performance between IQ 100 and 120, or 120 and 140, or 140 and 160, and kinda try to approximate this difference beyond the human range".
It is also important that *potential* ability is not the same as *actual* ability. Actual ability requires time to study and practice... and even the smartest human on the planet only has 24 hours a day. Perhaps they could become the world's greatest chess player, or the world's greatest politician, or the world's greatest expert on quantum physics, but they definitely do not have time to do all of that (plus many other things that are also potentially within their reach).
But this limitation does not necessarily apply to artificial intelligences, which can simply do things faster and remember better and maybe scale up if necessary. Consider that the LLMs are already experts on *everything*. Sure, they make mistakes and hallucinate etc. But still, they can talk, at a certain level of quality, about a million different topics, most of which I know nothing about.
Even this...
> In fact, I've often seen *regressions*--people who are really smart often struggle to talk meaningfully to "normal" people and fail to connect to how they see things. Which suggests to me that IQ starts losing a lot of its punch the higher you go. And may actually correlate with *reduced* performance in other aspects of life.
...is fundamentally a problem of time and attention. The people who struggle to talk meaningfully to normies probably have that problem because they don't practice the skill enough. Psychology or rhetoric or whatever is simply yet another thing to study that competes for your time with the other things you study.
> Even *within* their broader specialty (e.g. physics), a genius at quantum mechanics isn't better than most smart grad students at, say, general relativity--the skillsets and knowledge base are too different. And they're not good at all at, say, organic chemistry.
Again, specialization requires time, you can't specialize on everything. The genius at quantum mechanics probably has enough intelligence to master organic chemistry, but doesn't have the time.
Yes, comparing hypothetical things like "the smartest one among 10^50 humans" and "the smartest one among 10^60 humans" is not helpful.
I can't even guess whether the difference would be very *small* (both of them are already using the biological potential of the human brain to 99.999999...%, and the extra 9s have diminishing returns) or very *large* (both of them are some freakish mutants with magical powers, and the latter has even stronger magic than the former).
And if human brains have a biological limit of how smart they can get, then the values for "smarter than that" are undefined on the IQ scale, and we would need a new way to express it for the superhuman AIs. (Which ironically reminds me of how the definition of IQ had to be changed from "mental age divided by physical age" once we considered humans that are smarter than the average X years old human for any value of X. Now this would be kinda similar, for AIs that are smarter than an X-percentile human for any value of X.)
For a biological human, once you are sufficiently smart, *time* is the bottleneck. (And other things, like conscientiousness or wealth, but they kinda indirectly influence how much of your time you *can* and *will* actually spend studying the things.)
Seeking recommendations for novels.
I like fiction that aims at, and mostly succeeds, in capturing a time, a place and a culture.
John William De Forest described the "Great American Novel" as a book that would capture the "tableau" of American society, "paint the American soul" and capture "the ordinary emotions and manners of American existence".
Tom Wolfe's The Bonfire of the Vanities (1980s) might be the best example, but I'm not interested only in American fiction.
Our time (roughly 2008 global financial crisis to now, through Trump/Brexit; wokeism; the pervasiveness of the right/left dichotomy; smartphones / social media / techno-globalism; the pandemic and pandemic response) deserves a literature.
Who's writing it?
I don't know of any post-2008 period fiction, unless you want to count Stephenson's _Reamde_, which spans from the American Northwest to the coast of China in a world with a World of Warcraft-killing MMO.
Generally, I can recommend Amor Towles. Listened to _The Lincoln Highway_ on a road trip (fitting!); it paints a vivid picture of the US circa 1954. He has two other novels and a series of short stories (_Table for Two_), all appear to be period fiction, all well-received.
Thanks Paul. Towles has been recommended to me a few times now... Appreciate it.
I see nobody has mentioned "An Absolutely Remarkable Thing," which was described (in 2018) as "the best book I've read about what 'now' feels like." I'm partway through and quite enjoying it.
This is the kind of thing I'm looking for. This one is new to me, although Hank Green has been on everyone's radar. Thank you.
Depending on your preferences and standards, you might really enjoy Guy Gavriel Kay.
Kay is a fantasy author, but he's a very particular sort of fantasy author. His MO is picking a particular time and place in history, researching it very thoroughly, and then creating a fantasy setting that very strongly mirrors its culture, geography and politics, while still leaving him room to tell whatever story he wants to tell.
For example, The Lions of Al-Rassan (one of my favorite books ever) has a setting very closely based on the Iberian Peninsula during the 11th century. Most of the characters are original, but one is significantly based on El Cid. The story and setting are deeply involved with the interplay between three main religious groups, which have fictional names and beliefs but are clearly identifiable in the context of the setting as stand-ins for Christianity, Islam and Judaism (with one main character belonging to each). While it's obviously not a guide to the historical *events* of that time and place, it would take an author of rare talent to write historical fiction that was anywhere near as powerful at capturing the emotional experience of living through those sorts of historical events.
His Wikipedia article lists the inspirations for his various novels (note his first works, The Fionavar Tapestry, don't follow this pattern):
https://en.wikipedia.org/wiki/Guy_Gavriel_Kay
Sounds amazing … although not for me. I’m looking for books written recently (since 2008) that capture our present moment and culture.
I can’t tell if you’re just looking for 2008-present American fiction, but if not:
Gary Jennings’ historical fictions Aztec and The Journeyer are mind-bogglingly good. The amount of travel/research the author put into creating these novels is equally mind-boggling. Highly recommended.
Sorry for my lack of clarity. I’m looking for post-2008 fiction that captures the culture, but not necessarily American. Thanks for the recommendation. I haven’t read much historical fiction.
Patricia Lockwood (No One Is Talking About This) is a contender specifically if you're focusing on the feat of capturing zeitgeist and not necessarily on other qualities that make a novel 'great'. Although in my subjective, ill-educated opinion her prose is also very, very good.
Have heard of this but haven’t read it yet. (Lockwood has a new novel also that’s getting attention … and was the subject of an amazing profile piece in New Yorker recently: https://www.newyorker.com/magazine/2025/09/01/patricia-lockwood-profile )
Thanks for the recommendation.
“We are talking now of summer evenings in Knoxville Tennessee in the time that I lived there so successfully disguised to myself as a child.”
James Agee, *A Death in the Family,* US, pre-war years of 20th Century.
John Grisham seems to have set all his novels (well, the ones I read) in the TN-MS-AL area, late 20th century (present day when they were published). They felt reasonably immersive, assuming one enjoys trial lawyer drama.
I guess "The World According To Garp" hits several of those for... 1978? The Ellen Jamesians certainly capture the insanity of radical activism, and it features the line "there's no sex like trans-sex", so like... that's a recurring topic.
It sure does for 1978! Keen to find something for post-2008 world. Hasn’t been many I think. Someone elsewhere suggested The Mandibles and Mania by Lionel Shriver (from 2016 and 2024 respectively).
Edit: I’m sure it’s true (and always will be) that there’s no sex like trans sex.
I would recommend The Name of the Rose, by Umberto Eco. It's a murder mystery set in a 14th-century monastery, and succeeds to a great degree at presenting an immersive and detailed view of a very foreign (to us) world, and how the people who inhabit it think.
Also one of the main characters is a proto-rationalist who applies skepticism and empiricism to try to solve the mystery, so it may be of special interest to readers of this blog.
Most of Eco's novels do something like this -- Baudolino for the 12th century, The Island of the Day Before for the 17th, and The Prague Cemetery for the 19th. Part of their effectiveness, I think, is in mixing the themes and conventions of that era's storytelling in with the historically accurate details -- respectively, bestiaries and fantastic travel tales; cosmology and picaresque novels; conspiracy theories and feuilletons.
Thanks. It’s a fine book … not what I’m after though (literature of the post-2008 world).
I would recommend reading Narcissus and Goldmund, by Hermann Hesse. It's set in Medieval Germany, and is kind of about the relationship between this Catholic monk (Narcissus) and a wandering, free-spirit character (Goldmund) as he searches for the meaning of life. It's mostly about this Goldmund character and follows him from a young age to late adulthood.
It's not modern or American (published in Germany in 1930), but it's the first book that came to mind when I read "aims at, and mostly succeeds, in capturing a time, a place and a culture".
All of Hesse's other major novels are also fantastic, "The Glass Bead Game" being the best. But that one is set in a speculative future and doesn't capture anything that really exists. It does feature three short stories at the end of it, though, that the main character "wrote" as he imagined himself living in different countries during different periods of time in the past.
Haven't read this. Did read Steppenwolf recently and will revisit that again and again. It feels relevant to this point in time (even though it was written ~100 years ago).
Steppenwolf is also a favorite! Honestly I haven't read a book by him that didn't deeply affect me. His personal journey and thinking are somehow all very relevant today.
I've been reading a different non-fiction German writer whose stuff is from the 1930-40s, and his stuff directly describes a lot of what goes on in America today. Now that I think about it, some early Nietzsche stuff that I've read does as well, and he was even earlier... Is America just Germany 2?
Also sorry, I misunderstood your request. I thought the first sentence was what you were looking for and the last one was an afterthought rather than further specification.
No problem. I can see from the variance of replies that my original post was obviously not clear...
Am I the only one who didn't like One Battle After Another? I couldn't get over the cartoon and stock characters.
I really felt like I got my money's worth. I wouldn't spend 3 hours watching it at home, but for me spending 3 hours watching it in a theater was satisfying.
I don't think there is a bigger (even political) message. It sort of reminded me of a slapstick superhero movie or something, but with actors I love watching.
I thought it was decent! After the first 30 minutes, I wasn't expecting much, but it picked up the pace after the time skip. I agree that the characters were cartoon/stock characters, but funnily enough found that to be one of the movie's strengths. Different preferences, I guess?
The car chase scene was especially enjoyable, a real 'thriller' moment. Other than that, it was also more political than I would like, and wish they would have left the ending more open.
LOL I thought the car chase was interminable. I guess that's what makes horse races.
But, here is a question: was evil cartoon guy actually chasing stock teenage girl character, or simply making his getaway and happening to be on the same road? He had no reason to think that she had escaped, after all.
>it was also more political than I would like,
I don't mind a film being political. The problem is that the politics was facile. There are real issues that could have been explored, re political violence or re immigration or re law enforcement, but nothing was.
>and wish they would have left the ending more open.
Yeah, and "bad guy gets comeuppance / father and daughter draw closer / daughter carries on parents' political advocacy" was not exactly an unexpected ending.
I agree that the chase doesn't make sense and the politics was facile. I watched it as I would watch a Tarantino movie: over-the-top characters do over-the-top things, but it is still kind of entertaining. Might not be what the producers intended, or what the people you have heard praising the movie are saying.
>Might not be what the producers intended,
I have a friend who loved it, because he thought it was a spoof. I certainly don't think that was the intent. I would love to see an Armando Iannucci version of the film.
>I watched it as I would watch a Tarantino movie: over-the-top characters do over-the-top things
I can't say I am a huge fan of much of Tarantino's recent work, other than Once Upon A Time in Hollywood. He has really wasted his talent IMHO. And that certainly wasn't what I expected from a Paul Thomas Anderson film.
ACX 2024 grantee here with an update on the EEG entrainment project.
The study “Learning at your brain’s rhythm: individualized entrainment boosts learning for perceptual decisions” claims that entrainment (flashing a bright white light) at a person's individual peak alpha frequency (IAF) helps them learn to distinguish two types of patterns faster. I'm replicating this study.
Three weeks ago I provided an update on an Open Thread (https://www.astralcodexten.com/p/open-thread-398). I now have a video of the demo of the coming replication from the latest ACX meetup in London: https://www.youtube.com/watch?v=pP5dO97l9Bo. In the video you can actually see people getting their brains entrained and solving the study tasks while wearing an EEG headset. You can also see some charts pointing towards the effect being actually real as well as learn more about what the study did and why I am very optimistic about it actually replicating.
I'm looking for 10 in-person participants in London willing to dedicate 4 hours of their time for the replication. If you want to volunteer, please sign up using this form: https://forms.gle/X37zyTV3KhbSb3Ze9; your help will be greatly appreciated. From the previous update I have 11 sign-ups, but only half of the people actually confirmed their participation over email, so I'm still looking for more participants.
I'm also looking for 15-20 remote participants with their own EEG hardware willing to run the replication on themselves and provide me with their results. To volunteer, sign up using this form: https://forms.gle/G971tuMUfGqywEG38. Before turning this effect into a production system that helps people learn, it would be amazing to first see whether the effect replicates on a variety of different hardware.
PS: The project’s code is available on GitHub: https://github.com/eleweek/EEG_entrainment. The final replication results will be published on my substack: https://psychotechnology.substack.com/.
I wrote about self compassion, perfectionism, current trends in the tech industry, and how to cope with the diminishing prestige of the software engineering profession. "self compassion and the disposable engineer" https://dlants.me/self-compassion.html
I enjoyed your essay a lot. On the topic of "Hard Work": Graham mentioned that people with both talent and drive are rare. I think people underestimate the probabilistic forces at play, in the sense that if Bill Gates hadn't had both talent and drive, he'd be somewhere else in life. Like you identified, our culture glorifies these people and many wish to emulate them. So there is a certain tension between what is already there in people and what people want to create within themselves.
I think it's important to be able to strike a balance between the two poles: on one hand, striving to do your best can be good, but on the other hand, we all need to accept our limits with compassion. Trying to be something one is not will lead to unhappiness, while only accepting what is already there results in no progress ever being made.
Thanks for your comment!
I struggled a lot with this essay because there was a lot I wanted to say, and I ended up cutting many things to focus on the emotional self compassion and disposability message.
A future essay will cover the question of hard work and striving. I think we often use physical language to talk about mental exertion - gritting our teeth, putting our head down, etc. but the picture around mental exertion and exerting your will is really murky.
There's a lot of things that have to do with executive function, and some that have to do with motivation. In all of the literature I've read, it's not clear whether there is such a thing as a conscious exertion that improves your performance at the given task. In many cases, trying harder actually makes your performance worse.
It seems there are some cognitive processes that can be helpful around motivation and behavior change (as in the Stronger by Science article I linked), but little of that ends up looking like the common image of "working hard".
Personally I've adopted a philosophy that I'd summarize as "gentle re-engagement".
So yeah that will be a future essay, I think it's interesting stuff.
Personally I like the taoist philosophy of wei wu-wei (doing not-doing). It's similar to what you call gentle re-engagement, but leaves the whole thought of re-engagement out ;)
I'm probably not practicing it correctly (after all, I'm not a taoist sage), but in my understanding, it's mostly about doing what arises in the moment. This is very unhelpful advice for most people, since what people typically perceive as arising are things they consider vices, but if you are able to be gentle enough with yourself (and have chewed the whole eastern philosophy pill long enough) what comes up... turns out to be useful things. Because if you think about doing your taxes, you'll just start doing your taxes. If you think about watching a movie, you just watch a movie. If you keep thinking about things that you currently can't change or that aren't appropriate, you're simply not in the moment enough and too distracted by the illusions of life for wu-wei...
I also find taoist philosophy interesting! Highly recommend https://www.edwardslingerland.com/trying-not-to-try for a neuroscience / modern take on it.
I enjoyed the essay, thank you. I always felt my own struggle with perfectionism, as a software developer, comes because the job does demand a kind of perfection: you always have to be on guard against introducing bugs into the code. You cannot be an optimist as a software developer; that just means things will break.
I never felt like I struggled with feeling like I have worth because of my job, because on my end I treat companies as disposable myself. I'm in this job because it's fun and it pays well, I don't especially care about what my current employer thinks of me. Then again, maybe that's privilege, I'm only doing full stack web dev, so not a high pressure job, and I may be underestimating how difficult it will be to get another job if my company goes under or I somehow get fired (which I feel is unlikely).
I think feelings of low self-worth for me are more driven by a feeling that I don't know how to get on with people, but I think this essay touches on that too, with the general theme of self-compassion.
My employer introduced a High Deductible Health Plan option this year, but in direct contravention of everything I thought I knew about HDHPs they priced the premium higher than the HMO premium. Specifically, for individuals the HDHP premium is $20 higher per month and the only "perk" is a $250/yr HSA seed. So if you opt for the HDHP you net out $10 ahead for the entire year, at the cost of having to pay ~1000% more out of pocket if you ever consume any healthcare.
This seems like such an obviously horrible deal that I can't imagine anyone taking it, but if that's the case why did they bother introducing it in the first place? The open enrollment email alludes to vague "tax benefits" associated with having a HSA, but is there any possibility that those are good enough to justify opting for it?
As a young man in perfect health I'd love to have a real HDHP option that cost less in premiums than the HMO, so I'm trying to understand why they would have set things up this way. The email said that they had "heard employee requests for a HDHP" but introducing one structured like this seems like it's worse than just not introducing one at all and continuing to only offer us HMO and PPO. It feels like if the cafeteria said "we heard your requests for ahi tuna, so we've added some cat food to the menu". Does anyone have any insight on a possible explanation other than this being an institutional "fuck you" to the HDHP-wanters?
That's pretty aggressive. The HSA has tax benefits that make it worth it for a person who doesn't consume much health care. But those benefits are given to you by the IRS, not your employer. So by charging more, your employer is essentially trying to take from you a portion of the benefits given to you by the IRS. Usually an employer leans into tax-advantaged ways to compensate employees. Here they are doing the opposite. Shrug
Never attribute to malice what can be attributed to stupidity. Probably the prices are just what a computer spit out when some insurance agent plugged numbers in. Maybe there are sound actuarial reasons for those numbers; I don't know enough about the industry to comment on that. But in the absence of other evidence, I wouldn't assume evil motives on the part of the employer.
Wow, I've never heard of that before. Usually, companies *want* you to take the HDHP, hence pricing it the lowest and making an extra HSA contribution.
The email also mentioned something about "balancing the risk pool". I work at a university so there's a bimodal distribution of old-ass faculty members and young staff members on the insurance, I guess they didn't want all of us staff fleeing to the HDHP and leaving the HMO pool full of sickly Boomers?
Good chance this is it right here.
An HSA plan is a huge tax loophole, but its main advantage is as an investment, not as a way to reduce healthcare costs month to month or year to year. If used correctly, contributions, capital gains/interest, and distributions are all completely tax free. Other tax-advantaged plans like a 401(k) or a Roth IRA have one or more of these tax benefits, but none have all 3. So basically you get to sock money away into the stock market and never pay any taxes on it at all, provided you comply with the rules of the plan.
The medical expenses you can spend the money on are pretty broad, dental and vision, stuff like chiropractic, acupuncture, prescription OR over the counter medicine, even apparently a gym membership or diet expenses if you can get a letter from a doctor that they are medically necessary? Feminine hygiene products, diapers, first aid kits, assisted living, etc. It's a long list.
But the real tax advantage comes because you don't have to take the reimbursement when you pay for those expenses, and you shouldn't. You leave all of your contributions in the plan, invested, growing tax free. And you save all of your receipts for eligible expenses and don't take distributions until retirement as you need the money, using up your "banked" receipts over the years. If you don't end up with enough medical expenses accumulated to withdraw the full balance, then distributions just become taxable at your current retired tax rate (assuming you are over 65). So it basically turns into a normal 401(k) plan at that point.
If you make the max contribution of $4,300/year for 40 years, invested with 5% returns, that's a total cash investment of $172K and gains of $347K, for a total HSA balance of $519K at retirement. If you get 7% returns you end with $686K in gains for a total balance of $858K. Hopefully you don't have that many medical expenses to reimburse, but again, it just becomes a normal 401(k) plan if you don't. So you can look at it as part of a primary retirement planning strategy that gets you completely tax-free medical care for life if you play it right.
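For what it's worth, those figures check out against the standard future value of an annuity; a minimal sketch (contribution limit and return rates as assumed above):

```python
# Future value of a fixed annual HSA contribution, compounded yearly.
def hsa_balance(annual=4_300, years=40, rate=0.05):
    """End balance of `annual` contributed each year at `rate` growth."""
    return annual * ((1 + rate) ** years - 1) / rate

contributed = 4_300 * 40  # $172K of actual cash in
for rate in (0.05, 0.07):
    total = hsa_balance(rate=rate)
    print(f"{rate:.0%}: ${total:,.0f} total = ${contributed:,.0f} in + ${total - contributed:,.0f} gains")
# 5%: ~$519K total (~$347K gains); 7%: ~$858K total (~$686K gains).
```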
My HSA only has two investment choices: money market fund A, and money market fund B, which sort of limits the growth benefits. The reason for this choice is obvious if we're pretending it's for health expenses, not retirement. I might need it at any point.
I do count it as "cash" investments in my overall portfolio, which lets me be a little more aggressive in my other vehicles.
On the other hand, while my plan has a web form to upload receipts, there is no need to make my withdrawals correspond to them, or even be limited by them. I'm over 50 so maybe it's just assumed I can have whatever medical bills are necessary.
You can get a different HSA. The HSA account is between you and the IRS. You're not required to use the one that the plan suggests (other than that you'll prob. have to have it for any money the company kicks in). Fidelity offers HSA plans that you can invest in whatever (with no fees, prob. as a bit of a loss-leader; the kind of person who sets up an independent HSA is prob. the kind of person Fidelity would like to have a relationship with).
....but I *am* using Fidelity ... 🤔🤔🤔
Oh I was looking in entirely the wrong place on the web site. It's not in my HSA section, it's in the Trading section.
Got a chunk moved to an index fund today. Thanks!
It makes sense, but only in conjunction with a Health Savings Account. An individual can contribute $4,300 each year, pretax, to their HSA, which can be invested and grow tax free; any distributions for health care are also untaxed.
So, it's a good deal if you have money to put away in an HSA.
Alas, I do not, but thanks to all the explanations in this thread I do now understand why it might be desirable for other employees who have a high enough salary to minmax the HSA. Thanks everyone!
If your in-network provider list sucks (which it probably does with an HMO) and you have specific doctors you want to see that are out of network, then wanting the HSA might make sense. Probably doubly true if you have specific mental health providers you want to see.
The HSA (Health Savings Account) is an especially good retirement vehicle. Yes, you are right to be surprised that this is why your health plan option costs more; we have a really strange system.
You can put only up to a maximum amount in your HSA every year, and the interest earned is tax free, similar to a Roth IRA. But the money you put into it is pre-tax (like a 401k). And any money you withdraw for health expenses is tax free. Money you withdraw later for non-health expenses is taxed as income.
The people who were asking for the HDHP almost certainly wanted it for the HSA access (which by law is only available to those with a HDHP)
The money I've put in an HSA seems to drain over time since there's a fee.
You should get a different HSA account provider. The HSA is between you and the IRS; you don't have to use the one the company provides/recommends (other than for any money they kick in). You can set up an independent one (Fidelity for one offers such accounts with no fees) and fund it independently rather than through payroll deductions, and it will all net out the same on your tax return.
How does one fund it via pre-tax dollars?
You just put money in. You do have to have the right kind of health insurance plan (high deductible, HSA compatible) and there are limits etc. But the tax saving part all gets calculated on your next tax return (which is actually what's happening with the company-offered plan under the hood). Much like an IRA vs. 401k. You will get reporting from the HSA administrator on your contributions and distributions, you will report your contributions (the company plan does this on the W-2, but that's just a convenience rather than a difference in treatment), and this reduces your taxable income and thus your total tax bill. If you didn't otherwise adjust your withholding (W-4) or estimated tax you'll get a refund, but you can adjust those, if you like, to pay less tax through the year rather than get a refund at the end.
“This seems like such an obviously horrible deal that I can't imagine anyone taking it, but if that's the case why did they bother introducing it in the first place? The open enrollment email alludes to vague "tax benefits" associated with having a HSA, but is there any possibility that those are good enough to justify opting for it?”
I assume the HDHP still works out to your advantage. The tax benefit is that the money is tax free; the HSA offers investment vehicles that you allocate your money to, you park it there for decades while it grows, and you are hopefully not consuming healthcare.
In addition, if you ever have unreimbursed health care expenses, you can keep your receipts and potentially decades later you can be reimbursed for these expenses, which ends up being tax-free income to you.
Think of it as buying access to another retirement vehicle at the same time you have a health plan.
Limits of chess intelligence?
The "AI as normal technology" series of posts challenges the idea of intelligence being a really large spectrum. The standard metaphor is that the "village idiot - Einstein" gap is a tiny segment in the spectrum of intelligence, akin to visible light compared to the entire electromagnetic spectrum. They object, claiming that a lot of important domains naturally limit the power of intelligence.
This led me to think about the limits of intelligence in chess. The gap between an amateur and a grandmaster is similar to the gap between a grandmaster and a modern engine. A GM would win against an amateur with ~100% rate, and an engine would similarly wipe the floor with the GM. Naively, one would assume the chain goes much further.
But is it true? I claim (and would appreciate feedback here) that the chain basically ends there, assuming chess is a theoretical draw. There is likely no super-engine that would decisively beat Stockfish 17. Moreover, it is likely that even a game-theoretically optimal oracle would still draw against SF17 most of the time.
The key idea is that chess is a game of understanding and calculation, but both lose their usefulness beyond a certain level. One needs to understand which positions are more likely to win, based on general features of the position (material, piece activity, long-term weaknesses, etc.). But for any pure (fast) "understand" function, there exist very similar positions that differ only concretely, so one also needs to calculate to distinguish between them. In other words, there is only so much to abstractly "understand" about any given position; hence calculation is used to complement understanding. But calculation also fizzles out in usefulness after a certain depth. Indeed, for calculation to yield new insight at, say, depth 20 rather than 19, the position must remain "forcefully sharp" for at least one player all the way to that depth. Such cases exist, but are rare, and if the smart (with good understanding) player wants to avoid them, they mostly will.
In other words, superior chess understanding and superior calculation will get you only so far. To secure a draw, one needs to understand and calculate just "well enough." I think modern engines have likely reached this level. They mostly draw against each other. The famous AlphaZero vs. Stockfish 8 result appears to prove the point rather than to contradict it (they mostly drew within a 50–100 Elo performance margin).
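For scale, the standard logistic Elo formula turns that 50-100 point margin into an expected score; a quick sketch (the formula is the standard Elo model, the margins are the ones quoted above):

```python
# Expected score under the standard logistic Elo model:
# E = 1 / (1 + 10 ** (-diff / 400))
for diff in (50, 100):
    expected = 1 / (1 + 10 ** (-diff / 400))
    print(f"+{diff} Elo -> expected score ~{expected:.2f} per game")
# +50 -> ~0.57, +100 -> ~0.64: a real edge, but nothing like the
# near-100% scores separating amateurs from GMs, or GMs from engines.
```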
So the game of chess appears to have some "irreducible complexity" baked in; this makes the usefulness of intelligence diminish, so even a relatively dumb intelligence can be good enough to secure a draw.
This kind of idea is pretty common in video games, it's called a "skill ceiling", the point at which more skill can't translate into better results in a game. Players usually like games with a high skill ceiling so they can keep getting better at the game if they want to, and will sometimes mod a game specifically to increase the difficulty and accomplish this. And when they can't compete on skill, they compete on speed instead.
3 points that don't really answer your question but may be useful:
1. Asking this about chess is somewhat inconvenient because we know that if there is a limit, it is beyond human ability, so it's hard for us to visualize what the limit would be and our intuitions may not be reliable. You could simplify the question by asking this about simpler deterministic games, like tic-tac-toe; we definitely know there's an upper limit there. Given that we know there's a class of games where there is an upper limit, is it theoretically interesting whether or not chess falls into that category? Or is that just a piece of trivia about chess, and the point is that the category does exist?
2. My extremely limited knowledge (please correct me) of the history of chess is that whenever the state of play has evolved to the point that a winning strategy was found and play has stagnated, people just introduced rule changes or additions, or new game modes like timed moves, to open up the strategic space and make innovation and variety possible again. I would assume the same thing could be done with AGIs playing hyper-chess; if the humans invented something too simplistic to be fun, just patch it until it's fun again. I feel like this reveals a problem with the premise of the question; yes, anyone can make a toy example where there aren't enough levers and moving parts for intelligence to matter past a certain point, but anyone who is too intelligent for that toy can discard it and do something else instead. The idea that, if a toy model can be too simple that way, *reality itself* can also be too simple that way, is a big jump, especially when our toy examples can always be expanded to allow for more intelligence.
3. Which brings up another point, which is that one way to win a game of chess is to shoot your opponent. Or just restrain them until their timer runs out, or do hacking/neurolinguistic programming on them to force them to make errors, or lobby/manipulate the judges to hold the game at a time and place where your opponent will be suffering from jet lag, or etc. You can say 'but we set the rules of chess to mean that doing any of that is illegal and counts as a loss', but then the win state is either to hack your opponent into doing those things to get a loss, or hack the person setting the rules to change them. This is again attacking the premise of the question: yes, you can create a local context that is so simple that intelligence can't help you, but much of the point of intelligence is jumping out of contexts. Sure, this lion has bigger teeth and sharper claws than me, I can't win the fight in that context; but hey, what if I bring in the context of obsidian and sticks? Now I have this cool atlatl and the lion is dead of a spear through its head before it even scents me on the air. To the extent that the argument is that AI won't be scary because its ability to affect systems through more intelligence is limited by the complexity of the system, it can always step outside the system and apply the full complexity of all of reality to the situation if it wants to.
Thanks, lots of food for thought!
1. One motivation to ask questions about chess is that it is often (rightfully) cited as an example of where superintelligence is out there. The famous AlphaZero vs Stockfish match is also super impressive, since it seemed to suggest that superintelligences can _dwarf each other_! Which is scary and supports the idea of recursive self-improvement, foom, etc. The common objection is "but chess is a closed domain with limited rules, so it is not surprising that super-intelligence is scary there; real life is messy and open-ended, etc.". I just thought this specific objection is not really on point, and the chain of dwarfing intelligences may actually be quite short for chess. I don't think this is commonly brought up! For tic-tac-toe the chain is obviously trivial, but it is also qualitatively very different from chess (no combinatorial explosion).
2. > "The idea that, if a toy model can be too simple that way, *reality itself* can also be too simple that way, is a big jump, especially when our toy examples can always be expanded to allow for more intelligence.".
It is a huge jump and this is what I find the least convincing in "AI as normal technology" write-ups! I am not sure about game of Go even. I just wanted to start with something toy-ish, but still non-trivial, and see where can we go from there!
In reality, however, there are domains that resemble chess. One example is weather forecasting, which does feel similar to chess. To produce a 5-day forecast, one can use "general understanding" (models) and concrete calculations, but try doing the same for two weeks and chaos makes it futile. The weather might simply be too chaotic over a 2-week horizon.
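As a toy illustration of that futility (using the logistic map as a stand-in for a generic chaotic system, not a weather model): two initial states that differ by one part in a billion decorrelate completely within a few dozen iterations, so even vastly better "calculation" of the initial state buys only a handful of extra steps of horizon:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x). An analogy for chaotic forecasting, not a weather model.
r = 4.0
x, y = 0.400000000, 0.400000001  # initial states differing by 1e-9
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if abs(x - y) > 0.5:
        print(f"Trajectories fully decorrelated by step {step}")
        break
```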
3. I agree with all of that. Yes, there are domains where more intelligence is a game changer, and domains where it is already saturated and all needed intelligence is already discovered.
I think it is crucial to understand what kind of domains belong to each category, and why. Since "reality as a whole" includes all domains, it belongs to the former category.
What I am not sure about is whether the domains where intelligence is not that important are just too simple. Not sure if this is the only or the main reason. E.g. long-term weather forecasting is likely uncrackable not because the weather is simple, but because it is too chaotic. I think chess is also "too chaotic" - otherwise the chain of intelligences would have been longer!
If the main reason intelligence is limited is this chaotic property, that is an important insight.
Lions and spears, and Spaniards and Aztecs, are super important counter-examples. In those cases, reality was calculable/understandable enough to achieve a decisive advantage.
So I would greatly push back against "the world is messy, so AI won't be scary". The messiness does limit intelligence in important ways, though, and I would love to understand it more.
From the standard starting position, engines today can probably force a draw against perfect play. I'd call this a weak solve (abusing mathematical terminology and substituting any notion of "proof" with "looks true from experimental evidence"), but from an arbitrary position, my instinct is there's still considerable room for improvement.
I was thinking about this question a few months ago, and asked about the other end of the spectrum here: https://www.lesswrong.com/posts/gx7FuJW9cjwHAZwxh/chess-elo-of-random-play.
Afaik, in chess engine championships, it is already the case that the engines only play from given starting positions that give white an advantage, to avoid every game being a draw in mostly the same position. That would line up with it being functionally solved.
Then again, before the advent of LLMs, engines already were stronger than GMs, but LLMs improved on those engines in unforeseen ways (not just brute-force calculating lines, but playing more human-like on top). And engines are improving still.
I would probably share the intuition that chess is a theoretical draw, and that there are diminishing returns, but I am less sure we already are seeing levels that could draw even against magically perfect play.
> but I am less sure we already are seeing levels that could draw even against magically perfect play.
I am not 100% sure either. It is also helpful to define "magically perfect play" better.
For the god-given GTO engine with access to the oracle (giving a "-1:0:+1" true score for every position), there could be several options with varying levels of trickiness:
a) If the current position is a draw, the engine chooses between the draw-preserving moves randomly, or by some similarly silly heuristic (e.g. the move that comes alphabetically first in standard notation).
b) The engine plays in some sort of aggressive "play for a win" style, choosing the sharpest and most challenging lines possible, under some definition of sharpest and most challenging (what would this be?).
c) The engine knows everything about the opponent's (human-developed) engine, and can magically choose the line that will be the most challenging for this _specific engine_. E.g. exploiting weird idiosyncrasies in the engine's eval function - with oracle-like efficiency.
d) The engine can bluff, meaning it can play the losing move, knowing the refutation would require precision beyond the vision horizon of the opponent (with or without knowing the specific opponent).
For the human-developed opponent's engine, there are a couple of options:
a) It could be specifically designed or tuned to play for a draw (knowing the opponent is GTO), selecting the driest, most drawish lines.
b) It could use randomization, choosing between similarly evaluated positions (to defend against (c)-style exploitation).
The answer might depend on those options. Assuming the human engine implements (a) and (b), I'd probably still bet on a draw against god's (a) and (b). Not sure about (c) and (d).
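To make option (a) concrete, here is a minimal sketch using the python-chess library. The `oracle_score` function is a hypothetical stand-in for the god-given oracle (nothing like it exists for full chess; real Syzygy tablebases stop at 7 pieces); the rest is ordinary python-chess API:

```python
import random
import chess  # python-chess

def oracle_score(board: chess.Board) -> int:
    """Hypothetical oracle: +1 win, 0 draw, -1 loss for the side to move.
    Nothing like this exists for full chess; tablebases stop at 7 pieces."""
    raise NotImplementedError

def gto_option_a(board: chess.Board) -> chess.Move:
    """Option (a): among value-preserving moves, choose uniformly at random."""
    best_value, best_moves = -2, []
    for move in board.legal_moves:
        board.push(move)
        value = -oracle_score(board)  # child score flips with side to move
        board.pop()
        if value > best_value:
            best_value, best_moves = value, [move]
        elif value == best_value:
            best_moves.append(move)
    return random.choice(best_moves)
```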
Yes, there is a difference between GTO and exploitative play.
- GTO is defensive. It reduces your mistakes to zero, which means there's zero opportunity for the opponent to punish you.
- Exploitative play is aggressive, in order to maximize your score against a weaker opponent. Against a strong opponent, they might punish you, which means the exploitation would backfire. But if you can determine that your opponent is likely to let overaggression/blunders go unpunished, you can make riskier plays and reap bigger rewards.
- Similarly, there's the concept of "metagaming", which covers the rules and behavioral patterns outside the explicit ruleset. Metagaming is often interpreted to mean something like "best practices" (e.g. always control the center). But it can also include making "reads" on a specific opponent's idiosyncrasies, or perhaps the opponent has a reputation for a certain style, and so you react/prepare accordingly.
> The engine can bluff, meaning it can play the losing move, knowing the refutation would require precision beyond the vision horizon of the opponent (with or without knowing the specific opponent).
Fun fact! This sounds like what's known in chess as a "Tal Move". I.e. an aggressive move which is technically suboptimal, yet extremely challenging for opponents to navigate. Named after Latvian GM Mikhail Tal, "the Magician from Riga".
A theoretically perfect engine could be both GTO and exploitative, masterfully exploiting the opponent's idiosyncrasies while still not going into "theoretically lost" territory, remaining in the "draw" area. I suspect this would quite limit the options, though, compared to a Tal-style exploiter.
Thanks for the "metagaming" - nice term. My favorite example of metagaming is from a game with a Tal's stylistic opposite. In a well known Karpov vs Miles game, Karpov could have made a Greek Gift bishop sacrifice, which in that position was unclear if sound or not and required calculations. Miles could have defended against a potential sacrifice, but did not. Karpov did not go with the sacrifice and ultimately lost the game. After the game, Miles was asked why did he allow the sacrifice - did he calculate that it was unsound? "Oh, no, I did not calculate it at all, I knew Karpov would never go with this line", - Miles replied!
You could exploit both ultra-aggressive and ultra-calm style!
> A theoretically perfect engine could be both GTO and exploitative
Eh... GTO and Exploitation tend to be mutually exclusive. The point of GTO is to play maximally defensive. This doesn't mean that you shouldn't apply pressure and probe for errors, but it does mean that you should avoid strategies that have systematic weaknesses. (E.g. attacking with your queen too early can leave her vulnerable, which means having to move her twice, which forfeits development.) Whereas in order to be exploitative, you need to be willing to take bigger risks proactively. E.g. with gambits, you often sacrifice material/chain-integrity for faster development.
A perfect engine could theoretically switch between different styles of play. But you can't play GTO and Exploitative simultaneously. Instead, there's often going to be a pareto frontier of risk/reward, and you have to choose among different trade-off profiles, with GTO being the safest option (like the bond market).
> "Oh, no, I did not calculate it at all, I knew Karpov would never go with this line", - Miles replied!
You might be interested to learn that in poker, bluffing serves a dual purpose. Hollywood makes it seem like bluffing is only about winning the current pot. But bluffing also serves a 2nd purpose, which is to widen your range (in the eyes of opponents). E.g. if you bluff and someone calls it, you lose the pot. But also, the other players now *know* you're willing to bluff, which makes them more likely to call your raise in the future when your hand is actually strong. So even in GTO, it's important to bluff every once in a blue moon, just to keep opponents on their toes. Likewise, if Karpov had a reputation for a wider range of play, perhaps Miles wouldn't have been as confident in his read of Karpov.
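For the curious, the textbook toy calculation behind "bluff every once in a blue moon": with a pot-sized bet, making the caller indifferent pins your bluffs at one third of your betting range. A sketch (a single-street simplification that ignores ranges, position, and card removal):

```python
# Caller risks B to win P + B, so they are indifferent when
# bluff_freq * (P + B) == (1 - bluff_freq) * B, giving
# bluff_freq = B / (P + 2B) as a fraction of the betting range.
def gto_bluff_fraction(pot: float, bet: float) -> float:
    return bet / (pot + 2 * bet)

print(gto_bluff_fraction(pot=100, bet=100))  # pot-sized bet -> ~0.33
print(gto_bluff_fraction(pot=100, bet=50))   # half-pot bet  -> 0.25
```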
> You could exploit both ultra-aggressive and ultra-calm style!
This reminds me of another point you might find interesting.
People often think of "aggressive/defensive" as a 1D continuum. But in poker, it's widely recognized that you actually need a Punnett Square to properly classify strategies. There's an "aggressive/passive" dimension and a "tight/loose" dimension. Aggressive/passive means "how frequently do you raise, given a strong hand? (instead of check)" and tight/loose means "how easily do you fold, given a weak hand? (instead of call)".
Tight-aggressive is generally considered optimal. It means you raise on strong hands, and fold on weak hands. Which makes a lot of sense to me, because it highlights that good decision-making is *conditional* and *decisive*. In other words, it's important to recognize when your position is strong or weak, and to react accordingly. In contrast, it's common for players in strategy games to get into this mindset of "I should be more aggressive (by default)" or "I should be more defensive (by default)" when the correct answer is usually "it depends on the game-state". Though this can be hard to execute adeptly, since it relies on having developed a certain amount of game-sense.
I actually had another option in mind: I could imagine that a god-engine can avoid draws, but only if it accepts some risk of losing against the worse human-made engine.
In general, sharp and challenging attacking lines tend to be two-sided. You accept an imbalance in the position which might be exploited by both sides, believing that you got the better end of the deal. Those kinds of games are way more likely to result in either a win or a defeat, they are less drawish.
This is closest to your option d), except it doesn't have to be a losing move, just a move that ultimately guarantees that one side loses, because both have to try to win for optimal play.
Then again, maybe this moves the speculation just up one level, where the question becomes if human-made engines are good enough to always avoid imbalanced positions that punish playing for a draw.
I guess my intuition is mostly based on what has happened these last few years. I couldn't have predicted that LLMs would blow traditional engines out of the water and win against highest-level GMs with knight odds, but here we are. Maybe the next major advance will be able to grant Leela knight odds and still win. Or maybe there will be a ceiling after all.
> You accept an imbalance in the position which might be exploited by both sides, believing that you got the better end of the deal.
Thanks for the helpful intuition pump. If the god-like engine has god-level understanding, it also understands when the position just appears to be good for the weaker player. So it may be able to systematically find the positions where the other side faces borderline choices, and it takes one wrong choice to accept the losing bet.
But heck, I did win against Leela at queen odds two times; I built some intuition about the way it typically bluffs, and it helped...
A follow up with informal evidence I just ran across:
Here is a popular youtuber partly focused on analysing games, mentioning that Leela KnightOdds tends to play suboptimally sometimes to gain an edge, as the correct follow-up is hard to find (timestamped):
https://youtu.be/i0r68BfVbMU?t=494
> I guess my intuition is mostly based on what has happened these last few years. I couldn't have predicted that LLMs will blow traditional engines out of the water and win against highest level GMs with knight-odds
Nit: I think the odds-Leela models are CNNs, the LLMs' predecessors.
Ironically, my own intuition was also influenced by the Leela-odds games, but in the opposite direction! I am quite a weak player, and I played against Leela at queen odds 30 times or so. It was a fascinating experience. But even more fascinating was when I suddenly won, and then in a couple of games did it again! In both cases I managed to luckily call all the bluffs right, until Leela ran out of them and found itself completely demolished.
Which led me to think "well, if _even I_ can call the bluff"...
Honestly, I don't know enough of the technical details to know the difference, I may well be wrong on that. My understanding is also that current best engines aren't purely LLM, but use them in conjunction with other techniques.
My intuition on the knight-odds is purely derived from analysis of games against top GMs like Nakamura and Vachier-Lagrave, plus the underlying stats for the series that include that game. Maybe there are different versions? Or maybe you are downplaying your level? Or maybe it is down to time controls.
CNNs (convolutional neural networks) and LLMs (large language models) are types of deep learning architecture. CNNs were invented earlier and are a natural fit for chess, since geometry is baked into the architecture (CNNs were invented for machine vision). I am not aware of anybody serious trying to build a chess engine using LLMs, but I would not be surprised if somebody did. Leela used a CNN last time I checked.
I think the difference between queen odds and knight odds is huge; they played knight odds (I think?) and I played queen odds. I played 10 min + 5 sec increment rapid time control (and they played blitz?). I am ~1560 on lichess, so really not downplaying it :).
For queen odds, I initially tried to play pragmatic chess (like I would have played against a human in a similar situation). King safety, solid, slow, this kind of stuff. This turned out hopelessly badly.
Then I changed the strategy to mirror Leela's craziness, aiming to keep proposing minor piece / exchange sacrifices. I knew I could afford two minor piece sacrifices, and I knew Leela knew that and would mostly refuse.
With this strategy I was losing as well, but the way I was losing felt much less hopeless (I felt I was really close several times), and eventually it worked out. Fun experience.
I think mathematically, "perfect play" just means "win from a winning position, do not lose from a drawn position," and I agree it's meaningful to distinguish between "perfect play" strategies based on how they do against imperfect play: how many drawn positions they win, and how many losing positions they either win or draw from.
> I think mathematically, "perfect play" just means "win from a winning position, do not lose from a drawn position,"
There are other criteria you might consider. You could have two strategies that each satisfy both requirements and consider one better because it takes 1% as many moves to finish the game as the other one does. In that case, "perfect play" would require winning in the minimum number of moves.
Drawing is more ambiguous; it's arguably better to take longer so that your opponent has more chances to mess up.
Yeah, just preserving a draw is probably quite a weak criterion against a human engine playing for a draw. The human engine would propose the most drawish line at every move; it is enough for the "perfect player" side to accept once or twice to dry out the position completely. In a game of 40 moves, that should happen. So indifference to the style of play is likely a very drawish choice.
Being a quite weak amateur, I see this often when I practice endgame positions with Stockfish. It is often OK with drawish lines. Against a good human I would have lost many endgames I am able to draw against SF.
Aren’t you basically saying that chess is functionally solved, even if it hasn’t been yet mathematically? You’re describing what I presume would happen if you put two Game Theory Optimal poker engines playing heads up against each other, and that has been solved.
Yes, and that's one of my major complaints about the concept of "superintelligence". Chess is functionally solved, so building an engine that is 1,000x "smarter" won't do anything. Maybe it will tie 1000x faster against a modern engine, but no amount of CPU power could make it win. But I think that most problems in life are just like chess, and in fact we might have similarly hit the limits already on some of them.
I think your last sentence is the crux you'd have with most disagreers, me included. I'm actually confused as to why you think most problems in life are just like chess, you seem to have such different underlying intuitions generating that claim I'm struggling to bridge the gap. My guess is we have different central examples of "most problems in life"?
Well, to use the trite example, the problem of going faster than the speed of light is solved, in the sense that we know it can't be done. But consider something more boring and mundane: how do you solve the problem of sitting comfortably? The answer is going to be some kind of a chair. Sure, the superintelligent AGI could build some kind of a vibrating gel-based marvel-chair, but it's still going to be a chair, and it's not going to really surpass existing chairs all that much. Or consider portable energy storage. There are lots of different options there, with different tradeoffs, but for best energy storage density that is still acceptably safe (on Earth that is) you're probably looking at fossil fuels (be they natural or artificial). Obviously they have lots of massive disadvantages, and you can finagle the chemical tradeoffs in different types of electrical batteries, and maybe the AGI could improve battery energy density... but it won't be improving it by 100x or even 10x.
Most of our technologies, be they physical or social, follow the same pattern of rapid takeoff (lithium-ion batteries were a massive game-changer when they were first commercialized!) followed by diminishing returns -- and that includes LLMs, BTW. The standard AI-FOOM counter to that is to claim that the AGI is going to make some kind of unprecedented breakthroughs that we cannot possibly imagine -- but that's using the same logic as saying "there are no contradictions in my holy scripture because God is just so far beyond human understanding, amen". Sure, you can say that, and that could even be true, but it doesn't get you anywhere in terms of being able to make operational decisions.
> The possibility of an Alcubierre drive has not been disproved.
This is technically true, in the same way that technically the possibility of a cross-dimensional invasion from a gateway on Phobos has not been disproved. It's hypothetically possible. Especially if you can get your hands on lots and lots of negative mass.
> The actual answer is going to be that sitting comfortably is an arbitrary artifact of human anatomy, and could be solved by modifying human anatomy.
Yes, lots of problems can be "solved" by not solving them.
> Why not?
Because chair designs have evolved over the centuries, to the point where they are pretty close to optimal -- just like chess engines.
> Lithium-Ion is a subset of the broader category of rechargeable batteries, which has existed for centuries by that point. You can't even follow your own argument.
True, but I was pretty precise in my wording. Li-Ion batteries were (arguably minor) game-changers in the way that earlier rechargeables were not. That said, it kind of sounds like you agree with my example?
> I'll need a source on that.
LLMs were supposed to revolutionize all areas of business and/or put everyone out of work, but this hasn't happened, and actual businessmen as well as AI specialists are acknowledging that the "AI bubble" is about to burst. LLMs have indeed automated some baseline tasks, but this has not resulted in the expected dramatic increase in productivity. You can read this article for a summary, but there are many others:
https://fortune.com/2025/09/28/ai-dot-com-bubble-parallels-history-explained-companies-revenue-infrastructure/
> No, it's the same logic as "Humans have been making unprecedented breakthroughs non stop even without superintelligence"...
First of all, science output has been slowing down ever since science was invented (arguably in Ancient Babylon). Secondly, I cannot name a *single* breakthrough in human history, ever, that is in any way compatible with the claims the AI-doom/FOOM crowd is making. Not a single one. That's kind of the point: if I could name such an event, it would devalue their claims significantly, seeing as it already happened and yet we are all still here. This does not mean that "everything has always been obvious from the beginning", but it does mean that the non-obvious things usually offer incremental benefits. In addition, note once again that no technological revolution has occurred in the area of chair design, nor do I expect one to occur, ever.
> The operational decisions stemming from an overestimation of a foe's capabilities have won wars.
This reasoning compels you to overestimate the danger of literally every single possible threat. So, what have *you* done today to prepare for the demonic invasion from Phobos? Are you wearing cold iron to repel the Fae? Have you eaten bacon lately, and if so, how do you plan on avoiding Jahannam? And BTW, I am an interdimensional space wizard who will destroy the Earth unless you give me $100, so where's my money?
Yes, basically that! The thing is, I never appreciated the difference between "solved" and "functionally solved". In a theoretical sense, chess is astronomically far from solved. Not only is there no proof it is a draw; there is not even a proof that queen odds is winning (!), though practically we can be really, really, really 100% sure it is. There is likely no hope for a proof of "queen odds is winning" in our lifetime.
If chess is indeed "functionally solved", there might be some other important domains similarly functionally solved, or close to it. "AI as normal technology" suggests election forecasting and human persuasion.
"functionally solved" is a nice way of putting it. Essentially, the incremental ROI of additional intelligence goes asymptotically to zero, even if it is never quite zero (maybe an exponential decay?).
I wouldn't be surprised if there are a bunch of domains where existing solutions have "functionally solved" the domain (e.g. LED lighting is around 50% efficient - it might be that trying to squeeze out another factor of 2 isn't worth whatever exotic materials would be needed to do this) but also a bunch of domains where existing solutions are nowhere near "functionally solved" (I suspect most biomedical questions are like that, given the intrinsic complexity of the system one tries to fix).
You might find reading about computers playing checkers to be interesting. The best human is/was much closer to the theoretical peak of performance.
And humanity now has a "solution" to checkers.
You might want to start here: file:///Users/mroulo/Downloads/1040-Article%20Text-1037-1-10-20080129.pdf
But the basic idea that once a problem has been "solved" being much smarter than the folks who have the solution won't help you is correct.
I don't expect to lose tic-tac-toe even to people much smarter than I am.
And for practical purposes (e.g. transportation routing) we might have "good enough" algorithms now such that even a perfect solution isn't much of an improvement.
I once lost a game of tic-tac-toe to a dancing chicken in Chinatown, New York.
Seriously.
I was so involved in showing my father this interesting little bit of New York arcana that I took my eye off the ball.
> You might want to start here: file:///Users/mroulo/Downloads/1040-Article%20Text-1037-1-10-20080129.pdf
Local link?
Grrr.
Try this: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1040/958
The title is "Man Versus Machine ... for the World Checkers Championship" by Jonathan Schaeffer, Norman Treloar, Paul Lu, and Robert Lake.
AI Magazine Volume 14 Number 2
I personally, based on little more than gut hunches and personal experience, feel that general intelligence is also subject to similar (maybe not identical, but similar) diminishing returns above some level. Sure, going from IQ 80 to 100 to 120 has meaning across the board. But 120 to 140 has less meaning, and in fewer areas of life. And I've seen little evidence other than sketchy analogies that suggest that the trend on increasingly narrow and small gains doesn't continue. Effectively, it seems to act logistically: big gains going from sub normal to just above normal, then flattening out.
Does anyone here know of a good deep dive on what it would take to fix the US healthcare system? I'm only ever exposed to the left-wing "corporations (and insurers, specifically) are bad", but something tells me it's probably not that simple...
1) Definitely read the RCA link someone else posted, it does excellent first-principles analysis.
2) Give this a look: https://russroberts.medium.com/health-care-lessons-from-dr-keith-smith-aa29baefbecc
I view this as evidence for the hypothesis that most of the badness comes from insurance creating a principal-agent problem. When consumers aren't directly paying for services, that corrupts the ability of price signals to function properly. This is also evidenced by the fact that real prices in cosmetic surgery have fallen over the past 40 years. I don't think this is 100% of the explanation, but I think it's significant.
I used to believe that insurance was the main cause of high healthcare costs in the US. I still believe insurance is a bad model for healthcare for the reasons you mention (and more), but after coming across RCA's work I no longer believe that the price effect is as large as I used to think it was.
https://theincidentaleconomist.com/wordpress/
This is a good blog on healthcare economics that I was shown in college.
It's a horribly complex & complicated system, built up from decades of path-dependent tweaks; most reforms are insufficiently humble about the predictability of their proposals' effects.
I would start only by removing the tax exemption for employer-provided health insurance. The historical accident that led to it is well understood, so we can safely remove the fence, Mr. Chesterton.
I suspect a surprisingly-large portion of the overall dysfunction is due to the tight coupling between specific employment & insurance. Give that change a few years to marinate, then reevaluate & take the next step.
---------
All that said, eliminating Certificate of Need laws would do wonders for affordability, and isn't nearly as systemic a change so should be low risk.
>I would start only by removing the tax exemption for employer-provided health insurance.
Why? Wouldn't that lead to less insurance?
>I suspect a surprisingly-large portion of the overall dysfunction is due to the tight coupling between specific employment & insurance
OK, but wouldn't that require a different change to fix... allow employees to move their healthcare package between employers?
John Schilling has the bulk of it; a couple clarifications:
Things like allowing employees to move healthcare packages between employers (but still be employer-provided? I'm not sure what you're suggesting but I'll try to make good faith inferences) would increase the accidental (vs. essential) complexity of the overall system; there's no principled a priori reason for it to go through employment in the first place, and decoupling reduces complexity (which makes systems more robust).
Second, changing tax treatment should not be expected to immediately result in all employers discontinuing the benefit (especially given things like union contracts); it'll take time for the shifted incentives to take full effect (hence my earlier "Give that change a few years to marinate…"), by which time the individual market would likely be ready for its new customers.
It would lead to less employer-provided health insurance and (eventually) more privately-purchased health insurance. That moves "who pays" significantly closer to "who decides", which would eliminate some of the perverse incentives in the US health care market.
It also eliminates the problem of your insurance going away when you need it the most, because you are too sick and/or injured to work any more. We have workarounds for that, sort of, but they're ugly kludges.
As noted by the "eventually", there's a pitfall in this plan in that the private health insurance (and the realigned pay scales that enable people to afford it) isn't going to materialize instantly.
The US healthcare system is not broken!
https://randomcriticalanalysis.com/why-conventional-wisdom-on-health-care-is-wrong-a-primer/
Looking through the article,
1. Even assuming everything is as the author believes, the state of healthcare it argues the US has is what most people would call broken.
2. It never argues directly that US healthcare isn't more inefficient than other countries.
When people say US healthcare is broken, they mean it's inefficient; we're spending way more money without improving health outcomes. People have different guesses of where the inefficiency is: some suspect wages, others suspect administrative complexity, others still blame more unnecessary medical tests and procedures, etc. But as long as it's wasting too much money somewhere, we say it's broken.
The linked article says it's true that we're spending more on healthcare without much improvement in outcomes, and this is primarily because of the diminishing returns of additional healthcare, and because Americans have worse life outcomes due to lifestyle factors, especially obesity and drug use [1].
Getting excessive healthcare for diminishing returns is what we call broken. We don't want a healthcare system that spends double what most other rich countries spend on healthcare for unnecessary medical tests and procedures. People rely on the healthcare system to tell them what kind of care they need. You might get a CT scan on the recommendation of your doctor. And even if it's purely the patient's initiative to get extra medical care, it's still a failure of the system if it's not improving health outcomes.
You can imagine a society where people, upon learning they have a terminal illness, commonly turn to alternative medicine quacks who take all their money and don't fix their illness. This may have been due to the patients' own choices, but it's clearly not satisfying their true preference of not dying. They don't have the expertise to choose the best treatment on their own. A good system would help them satisfy their true preference. The same principle can be applied to mainstream medicine: a good system doesn't waste tons of resources on ineffective treatments. This is just as bad as if the waste were in administrative overhead.
My second point was the article never directly argues that healthcare isn't broken (i.e. inefficient). It shows we spend more because we have more money and can spend more, but this is exactly what you would expect if healthcare were broken. It's not possible for poor countries to waste more money than they have available on healthcare. The closest it comes to arguing healthcare isn't broken is when it argues lifestyle choices are able to explain a decent part of the lower life expectancy in the US. But that doesn't show US healthcare is efficient. Even if you could explain 100% of the difference in life expectancy with factors unrelated to healthcare, you'd still be left with a country that spends twice as much on healthcare for no improvement compared to other countries. The points the article argues are all consistent with a broken US healthcare system that is wasting money somewhere.
[1] The drug use is interesting because the opioid crisis was started primarily by overprescription of opioids by the US healthcare system, which increased the number of addicts and in turn increased demand for illicit opioids. So it was in a sense also caused by a broken US healthcare system, but broken in a completely different way than whatever is causing the rising prices.
>It shows we spend more because we have more money and can spend more
That's not necessarily the fault of the healthcare system, though. If people have a preference for wasting money on placebos then that's not something you can really blame the system for. I have a suspicion that healthcare, like education, is an economic reflection of a characterological defect in the culture. We have an unrealistically naive set of expectations for both. With education, for example, we expect the system to educate *everyone* to the same standard. We can't accept that that's not possible because some people are stupider and lazier, so we blame the system or the teachers when those people fail. The teachers can't fix the problem and they can't be honest about it without risking their jobs, and so they participate in obfuscation like No Child Left Behind or eliminating racist testing. That does nothing but launder people's naive ideology to themselves and lets politicians grandstand that they're doing something when in fact it's nothing but a giant boondoggle. The parallel flaw with healthcare is that we think everyone should get the best care and have it be affordable. There is zero notion of "sorry but you don't get the million-dollar surgery that has a small chance of extending your life by 3 years because you make minimum wage and that would represent a deadweight loss to society". So we offload that conflict to insurance companies and they predictably take our money. They're not actually in the business of improving health outcomes, they're in the business of making us feel like we've solved the problem of healthcare costs.
We can't face harsh realities as a culture and so we sweep them under rugs. I'm not sure it's fair to blame the system for that.
I think I addressed this in my last comment. If your doctor tells you to get an MRI, you get an MRI. The general public is not equipped to figure out if a test or treatment is effective enough to be worth the cost. We rely on the expertise of medical professionals.
Every healthcare system has to deal with this one way or another. At the end of the day, a healthcare system that's inefficient because it provides too much expensive and ineffective treatment is just as bad as a healthcare system that's equally inefficient due to administrative overhead.
'this is primarily because of the diminishing returns of additional healthcare'
Yes and no. It is primarily because Americans are so much richer than everyone else that they spend more on healthcare, and additional spending on healthcare is marginally less useful. The key thing here is that if other countries were as rich as the US, they would also be spending as much on healthcare! And indeed, when countries get richer, they move along pretty much the same path that the US has moved along, regardless of whether you would regard their system as broken or not. See the animation at this link:
https://randomcriticalanalysis.com/2017/07/27/health-care-prices-do-not-play-the-role-most-believe/
Of course richer countries generally spend more on healthcare than poorer countries. A trendline on a graph showing this shouldn't change anyone's position much, since everyone already believes this to be true.
And again, importantly, the articles you linked never show that the US gets better outcomes for its increased health spending. You could use all the same charts from the article to argue that the US healthcare system is one of the most wasteful. Everything they show is also what you would expect to see if it were waste. They show richer countries spend more, but they never show the extra spending isn't going to, for example, administrative complexity.
US healthcare spending also looks high even for its high income. According to this data [1], the US spends about 1.8 times more on health per capita (PPP) than Canada, despite only having 1.3 times the GDP per capita (PPP). Or see the first chart here [2]. This appears to refute the central point of the articles you linked.
[1] https://data.worldbank.org/indicator/SH.XPD.CHEX.PP.CD?locations=US-CA
[2] https://www.healthsystemtracker.org/chart-collection/health-spending-u-s-compare-countries/
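The quick arithmetic behind that [1] comparison, using the rounded ratios above (exact figures vary by year and PPP adjustment):

```python
# If US health spending per capita is ~1.8x Canada's while GDP per capita
# is only ~1.3x, the US devotes a noticeably larger share of income to health:
spend_ratio, income_ratio = 1.8, 1.3
print(f"Health share of income, US vs Canada: {spend_ratio / income_ratio:.2f}x")
# ~1.38x: spending looks high even after adjusting for the income gap.
```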
Without diving into the weeds, the basic “worse outcomes for more money” is pretty strong.
You have to dive into the weeds at least a little to understand something! The US healthcare system does not have worse outcomes for more money. But here's a tldr for you
The US only has worse outcomes because of higher obesity and violent crime, which have no relationship to the healthcare system.
The US only appears to be spending more money because it is richer than everyone else by far. When it was as rich as these other countries, it was spending approximately what they spend today.
Uhhhhh
Alright, the best deep dive I'm aware of is Kenneth Arrow's work on health-care market failures (1). You can read the seminal paper there. He lays out the case that market forces don't work correctly in healthcare because of the profound ignorance of the patient, the insurer, and even healthcare staff. Basically, the patient doesn't know what he needs or what it costs, the doctor doesn't know what it costs, and the insurer doesn't know what is needed, so of course everyone fails.
Having quickly scanned it though, I think it's pretty unreadable for anyone without an econ background. Maybe there's a good summary somewhere.
(1) https://assets.aeaweb.org/asset-server/files/9442.pdf
Arrow's work is not a deep dive into the US system. It's just a summary set of models for why healthcare markets may not work like other markets
I think the big problem is that the person deciding which medicine to use, the person paying for the medicine, and the person using the medicine are three different people. Even if everyone knew what it costs, it wouldn't matter because the only one that cares is the insurance company, and they don't get a choice. If they don't buy what the doctor prescribed, they'll get sued.
Another problem not yet mentioned: everyone makes too much money. Doctors are making (say) $800K when $200K would be sufficient. This is a problem which can be fixed independently of all the others, just increase doctor supply.
More specifically, less strict requirements for being a doctor. Even if them studying philosophy does give a touch of class: https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/
What specifically are you trying to fix?
One big distinction I'm not seeing in the other comments is the difference between improving *healthcare delivery* and improving *healthcare financing*. Often optimizing financing puts pressure on how good of care you can actually deliver. And optimizing care often costs more and puts pressure on financing.
The US has (generally) fallen into optimizing mostly for the top-end delivery--the upper bound of US healthcare is second-to-none IMO. And even some of the mid-tier is pretty darn good. But that comes at a cost. Our financing system is baroque and obnoxious at all levels. A large part of that is historically contingent rather than intrinsic, but *history matters*. You can't get the same results as someone else just by copying their programs at T = now. It's all too entangled. Plus demographic effects.
So yeah. What are you trying to fix? What are you willing to sacrifice to get there (including political viability)?
Not a recipe, but my favorite piece on how it came to be https://siderea.dreamwidth.org/1182366.html
Residencies are part of the problem, we don't have enough doctors and the residency system creates a specific national limit on the number of doctors we are allowed to produce each year. And the system is run by people who benefit from doctors being scarce and thus in high demand/very well paid.
Not a deep dive by any means, but the problems seem fairly obvious to me. Market discipline drives down prices, but healthcare blocks it at every turn. Patients are restricted by their healthcare plan when choosing a doctor. There's no price transparency, and hence no ability to shop around and choose a provider based on price. Some patients are directly exposed to the cost of the service, and some aren't; some don't know until after they've received the service whether they will be.
In terms of policy, I think the following three reforms would get us 90% of the way toward a sane healthcare system:
* Insurance plans may not discriminate between providers: every network should include every provider
* Each provider must publish a publicly available fee schedule, which states how much they charge for any service that they offer, at a level of granularity that patients can reasonably be expected to understand. Providers may charge whatever prices they want, but they may not deviate from the prices that they've published in their fee schedule.
* The government should operate a publicly accessible provider database, including up-to-date fee schedules, to facilitate discoverability and competition.
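For illustration only, one entry in such a published fee schedule might look like the sketch below; the field names are hypothetical, though real schedules would presumably key off standard CPT billing codes:

```python
# Hypothetical shape of one entry in a provider's published fee schedule.
# Field names are made up for illustration; "99213" is a real CPT-style
# code for an established-patient office visit.
fee_schedule_entry = {
    "provider_id": "example-clinic-001",
    "service_code": "99213",
    "plain_language": "Office visit, established patient, 20-29 minutes",
    "price_usd": 145.00,   # binding price under the proposal
    "effective_date": "2025-01-01",
}
```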
Doctors literally can't do any of that unless they don't take insurance. If you come to me and ask for treatment, I can't tell you how much your insurance will pay; it depends on whether I am in network at this specific moment, whether the insurance company covers the specific thing I do at 100%, 50%, or not at all, whether the insurance company has negotiated a special deal with my boss to get charged less, etc.
Cigna just announced that it is going to use AI to automatically "downcode" bills for complex visits; this might save you money, or it might result in your doctor no longer seeing Cigna patients. It's chaos.
Doctors can literally say "I don't know how much your insurance will pay and I don't care, *I* will provide this service if and only if someone pays me eight hundred dollars. If your insurance company will cover eight hundred dollars, great, if it will cover six hundred dollars and you will kick in two hundred, also great, if it's just the six hundred, no deal, and if your insurance would have covered a thousand then I'll have to keep that in mind next time I revise my fee schedule."
Jesse's proposed solution is an inefficient kludge, a blunt instrument where we would prefer a scalpel, and offensive to my libertarian sensibilities. But it isn't literally unworkable or literally incompatible with insurance coverage. And the system we've got now is *also* an inefficient kludge that offends my libertarian sensibilities.
How does price discrimination work for emergencies? A scenario lots of people bring up is having some emergency then having to pay exorbitant costs out of pocket. In an emergency, you won't get to choose providers.
As you pointed out, shopping around in a competitive market isn't really possible for emergency care. Hence, I think socialized emergency rooms are probably the least bad option: inefficient but predictable, and ostensibly operated in the public's interest.
I know it's kind of a taboo in US culture, but EU-style single-payer universal health care works great. Everyone complains about it because it's bureaucratic and not infinitely funded, but it's a huge deal to know that, whatever your life situation, when it comes to serious health issues the state has your back, without a bill at all. It sure beats tying your health insurance to your employer!
What is ‘EU style’ single payer? Different European countries all have different systems, many of which are not single payer. (Eg Germany, which is the central example of an EU country, is not single payer). Meanwhile the most common example of single payer, England, is not in the EU.
"EU style" single payer is a program largely characterized by cost savings brought about by a fairly monolithic genetic lineage mostly concentrated in a few large metropolises, rather than a melting pot of several genetic lineages spread over millions of square miles, and a defense budget paid for by a friendly ally.
I'm only somewhat snarking this.
>and not infinitely funded
This is the big thing people in the US overlook. Sure, single-payer healthcare in other countries is only a *little* better than healthcare in the US on some metrics, and still has its annoyances and tragedies. But that's what happens when they spend half to three quarters as much on healthcare as we do. Double those systems' budgets to match what we spend, and many things would get a lot better.
You have to be specific about which country you're talking about. Do you mean the UK's NHS? Because while it is beloved by the UK, it has been in serious trouble in recent years, and Americans wouldn't put up with the loss of service and waiting lists it entails.
The UK has private provision as well.
Americans put up with a lot of waiting for care as it is. Maybe I don't have a sense of how much worse it could be?
No they don’t? Depends on insurance I guess but I’ve never had to wait more than a day or two to be seen.
I live in Spain, and have lived in France before, so mainly these. Here in Spain many people who can afford it take private insurance too, for ~100€/mo you get access to private providers with easy access to specialists (in the public system you need to go through your GP first, and then possibly wait months), and a nicer room in case of hospitalization. It's a good deal, the public and private systems keep each other somewhat in check, private insurance can't go too expensive because the public system is decent enough, and the public system gets some slack because a good chunk of the population don't use it for their common needs. The word on the street is still that, for really serious stuff, the public system is better.
That's why it's important to clarify 'single payer' vs. 'single provider'. The latter (like the NHS) are a whole different ball of wax. A single payer system (like France or Sweden's) is a much easier transition.
There's an immediate, trivial-to-implement fix for drug prices: allow the import of drugs from anywhere in the world. That should equalize prices (to within shipping costs).
Hmm, wouldn't that increase risk, since foreign drugs wouldn't have been subjected to the standards we subject them to here?
Yeah, getting rid of that attitude is precisely what it'd take to fix the US healthcare system. I am not optimistic.
"We've Got You Covered: Rebooting American Health Care" sounds like what you're looking for.
Liran Einav and Amy Finkelstein are among the top economists who have researched the US healthcare system. I haven't read the book but knowing these writers, I feel like this is at the very least a place to start. And these are economists, so very far from a simplistic narrative that ignores incentives and markets (but also not a knee-jerk "just leave everything to markets" perspective).
https://www.amazon.com/Weve-Got-You-Covered-Rebooting/dp/059342123X
Probably a good place to start is our very own: https://www.astralcodexten.com/p/book-review-which-country-has-the
I was wondering if Scott had written on this, I must've missed this post when it came out. Thanks!
I wouldn't call it a deep dive, but I did sketch how a libertarian might fix the US healthcare system, a few years ago. It might serve as a starting point.
https://www.quora.com/As-a-libertarian-how-do-you-think-the-US-should-reform-its-health-care-system/answer/Paul-Brinkley-1
I guess that depends on what part you think is broken.
The only remotely complex part is that corporations buy off US politicians, so it can't be fixed; it's just one of many US governance problems.
I wonder what people worried that AI is going to kill us all a few years from now think about Trump's recent push to curtail visas for tech workers?
Like, a disruption to the Silicon Valley innovation ecosystem building doomsday machines seems actively good from their perspective?
Btw. it would not be the first time the priorities of anti-globalist Republicans and AI doomers coincide; see Steve Bannon's successful campaign against the Congressional moratorium on AI regulation: https://www.theverge.com/politics/704424/ai-moratorium-ted-cruz-steve-bannon-trump
Well yes, political disruption would help with slowing down AI development. So would a nuclear war.
The minus is that deconcentrating AI makes government restrictions a lot harder - right now the US or California government can pretty much unilaterally hit a pause button (maybe with a deal with China). If the US loses the tech advantage and AI research gets scattered between a dozen countries it does slow down a bit, but it also becomes much less manageable.
I mean, can the US hit pause?
A couple of friends of mine work in the AI space and from what I understand it's pretty common that a lot of the best AI talent moves to or works for silicon valley because that's where the big AI players are.
But I could imagine that if California decided to stop AI research tomorrow, somebody would set up a new hot AI startup in, say, Amsterdam. Since people have already demonstrated they're willing to move from their home country to California, presumably a good bunch of them would be willing to move to Amsterdam, too. And they'd take their experience, knowledge and expertise with them.
So even if the US banned AI research, would that actually work?
For what it's worth, the industry is whining about how impossible it is to do software startups in the EU because of excessive regulation. They are probably exaggerating, but it is not like relocating from California to the Netherlands would be easy.
They could probably at least easily get other countries to agree to a pause deal. And the US has enough of the AI infrastructure that it'd be hard to go against it.
Fair enough. It would presumably take a while for other AI hubs in not-US-aligned places to spring up and rival what currently exists in the US.
This sounds like a generalized argument along the lines of "we can't ever use our power, because then we would lose it." If it can't ever be used, why keep it in the first place? Like, after "unilaterally hitting a pause button," researchers might also scatter across a dozen different countries?
It's a lot easier to keep an arms control treaty going once you've made it. The idea is that now, if the US suggests reasonable limits on AI, it's a far enough lead that it's relatively easy to get an international treaty to go along with it. That's harder to do the more active players who think they can get an AI advantage there are.
I mean, you can just as easily argue that until one country has a clear advantage, it has no incentive to want to stop developing, but when no one has an advantage, everyone has an incentive to make a deal. Treaties limiting the nuclear arms race were concluded only after the USSR mostly caught up to the US, if I recall correctly.
But I think both these possibilities are speculations of secondary importance compared to the direct effect of just damaging capabilities of leading labs.
Doesn't seem like the kind of thing that could help that much. Maybe it slightly slows things down but AI progress will still march forward.
How self aware do you think the 95%ile most aware human beings are? Let's use a scale where '100% self aware' would be at each moment you recognize the total set of drives and instincts that are active within your brain, and '0%' means you have zero internal awareness - not even of how you feel - and you are just acting.
In other words, how ignorant are the vast majority of people of their own drives, instincts, and motives, and how much does this matter?
I think you're misunderstanding the way human consciousness functions. For instance, our qualia continually feed us impressions (even when we're sleeping), but we selectively filter (either voluntarily or involuntarily) the impressions we receive. For instance, you may be watching a bird fly across your visual field, but while doing so, you won't be paying attention to the sensation of the clothes on your body (if you're wearing them) or the smells from the surrounding environment. Therefore, our "awareness," at least of the external world, is constantly being filtered and shifted. If we claim there are five senses (and I'm not 100% sure they're limited to five), then we're only 20% aware at any given moment.
OTOH, recent studies show that our sequential thinking and task management works at a piddling ~10 bits per second. Yes, it's that slow. I don't think this is the whole story, though, since we can recognize a familiar face in a crowd in under 300 milliseconds. And we can recognize a face outside a crowd in under 150 milliseconds. If we attempt to convert human information processing into bits (and I don't necessarily think this is a particularly accurate model), we're processing multigigabit images in tenth-of-a-second windows.
Moreover, we're processing those images without conscious awareness of the mechanism.
Assuming you're a proficient reader, can you look at the text in this comment and *not* understand it? I can't. I read something, and unless I have to puzzle over a word, I immediately grasp the meaning without any conscious effort. This all gets done without any will on my part. And I can't turn it off.
As for self-awareness, it seems to be a type of qualia, because it segues in between the qualic inputs. Through meditation, you can train your mind to disregard external inputs and maintain a focus on the sense of awareness. Although your awareness dominates your experience during those sessions, your external inputs are either ignored or muted. So at 100% self-awareness, you've got little awareness of anything else because you're shutting out a lot of other shit.
> I read something, and unless I have to puzzle over a word, I immediately grasp the meaning without any conscious effort.
Not only that but you are, if it’s a novel, converting the text into different voices and visual images which, for me at least, are cinematic in scope. That seems like a lot of processing power. Were we to get an AI to produce images as fast as a human could read a page of text it would be impressive.
Yet, I've noticed that internal images are more abstracted than the images that I perceive through my senses. But this may be a peculiarity of how my consciousness works. I spent some time studying under a Nyingma instructor. Their meditative exercises all involved visualizations—of mandalas or meditation deities. I discontinued the practice because I found myself getting frustrated that I couldn't create the image in my mind.
However, in other states of consciousness that weren't conducive to meditation, I noticed that I could visualize things in detail. I've observed that while I'm in the hypnagogic state before sleep, I'm able to direct my visualizations, allowing me to construct the facial features of friends, imaginary people, animals, or objects. Unfortunately, the hypnagogic state is transitory. Also, I can visualize freely while on psychedelics like LSD. But I can't seem to meditate while tripping. ;-)
I imagine that there is some way to measure activity level in the prefrontal cortex during deliberative thinking, vs activity and energy levels in the rest of the brain.
> As for self-awareness, it seems to be a type of qualia, because it segues in between the qualic inputs.
I find this claim very remarkable and interesting. There's a whole sequential model in there that hints at a full theory behind what you're saying. Could you elaborate a bit more?
After spending years observing my mind, I noticed that, while it's absorbed in a strong sensory experience, my consciousness can't pay attention to my selfhood (for the duration of that experience). As my attention to the sensory input fades, my self-identity reasserts itself, and I decide what to do or observe next—or a new sensory input may capture/override my attention.
From that, I derived the idea that the sense of self-identity is an internal "feeling" that is functionally equivalent to feelings we get from external sources. Moreover, there seems to be a "Qualia Manager" that cycles between our external qualia (the five senses) and our internal qualia, which include our sense of self, and the feelings derived from the functioning of our autonomic systems (breathing, digestion, sexual arousal, body posture, balance, etc.). Our self-identity inserts itself in the "time slots" between other feelings. We can't (at least I can't) focus our attention on two things at once. My Qualia Manager appears to employ a weighting system that prioritizes different sensory inputs at any given time. We can impose our will and override our Qualia Manager, and either focus on our self-identity, or focus on sequential tasks (problem solving, speech, writing)—but without training it's hard to override our Qualia Manager for any length of time. Eventually, we get distracted by our sensory inputs, and the Qualia Manager reverts to automatic functioning.
This idea is implicit in Buddhist meditative praxis and their concept of aggregate processes (skandhas). By letting their consciousness follow the breath, i.e., by focusing on the sensation of inhaling and exhaling, the meditator attempts to train their consciousness not to be distracted by inputs from external qualia and internal qualia—other than breathing, and among the distractive internal qualia is the continually intrusive sense of self.
I don't have a metric, less any research to share, but at a first approximation my intuition is that the spectrum of individual differences in self-awareness is narrow compared to the scale you are using. I think there are hard constraints on the ability of a system to comprehensively understand/model itself.
Plus, if by "self-aware" you mean consciously aware of the factors influencing cognitive behavior, that's an even narrower range. My guess is that the most self-aware humans are well below 50%.
None of that means we can't become more self-aware than our individual default state, or that there aren't benefits that come from achieving that.
This was my intuition as well. I would think if you're _really_ good at it, you end up with less mental noise. But if someone came along and asked "why are you sitting there in meditation", "because it feels good" would catch maybe 20% of what was really going on under the hood.
I also have an intuition that we have more control over who and what we are in the future than over what we do in the present moment.
All men are slaves to the Darkness That Comes Before.
I love those books.
Yes they are SO GOOD!
Think Bakker will ever finish it, or that it needs to be finished?
Good question I'm not suited to answer. The first trilogy was great; of the second trilogy I read the first book and loved it, but the rest were not out when I was in my fiction phase. Not sure how many are out now.
I hope that if he did not finish it in his mind, he does. The first trilogy was so emotionally visceral; I'd/I've never read anything like it before/since.
Both 'trilogies' are great (second one is actually 4 books), and it does technically reach a conclusion of sorts, but Bakker said truly finishing requires two more books, in a last arc. I really recommend going back and finishing.
Does anyone know of papers investigating the wisdom of crowds effect on a single LLM? That is, if you retry the same prompt 10 times and take the mean or median answer, does that improve the accuracy over single-shot?
Here's one: https://arxiv.org/abs/2501.17310. Yes, it seems to.
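For intuition, here's a minimal sketch of that retry-and-aggregate idea (the paper's setting is fancier). Everything here is an assumption: ask_llm stands in for whatever chat client you'd call at nonzero temperature, faked with a noisy function so the snippet runs on its own.

import random
import statistics

def ask_llm(prompt: str) -> float:
    # Hypothetical stand-in for a real API call at nonzero temperature.
    # Faked as a noisy numeric answer centered on the true value, 42.
    return 42.0 + random.gauss(0, 5)

def aggregated_answer(prompt: str, n: int = 10) -> float:
    samples = [ask_llm(prompt) for _ in range(n)]
    # Median is more robust than the mean to the occasional wildly wrong sample.
    return statistics.median(samples)

print(aggregated_answer("What is 6 x 7?"))  # clusters near 42 as n grows

For non-numeric answers, the analogue is majority voting over the sampled outputs, which is roughly what the self-consistency literature does.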
Thanks!
Am so grateful for Trump's push for peace in the middle east. For the first time you have competent negotiators recognizing which players you need involved and who has leverage and who doesn't and what demands are achievable.
Hopefully within 10 years you have regional integration and two peaceful states for two peoples, each wishing for all of it, but only culturally, not militarily.
Until his peace plan actually gets accepted by Hamas and until this international peacekeeping coalition he's imagining actually puts boots on the ground, I'm not going to heap any praise on him.
The hard part of Israel/Palestine has never been coming up with ideas; the hard part has been getting people to agree to them. (And to keep agreeing even when some asshole on the other side breaches the agreement.)
He should get the Nobel Peace Prize. What did Obama do to deserve it? A nuclear deal that fell apart barely after his term was over?
Trump resolved Nagorno-Karabakh, helped with India-Pakistan, got the Abraham Accords, Israel-Iran, etc.
He's not going to get it because he's seen as too offensive, but you get my point
You minimize Obama's Middle East "achievements". There was also that small episode with like half of the region descending into some combination of civil wars, becoming Iranian proxies, becoming Russian bases, or being taken over by ISIS. And its indirect effects on the internal cohesion of the EU.
Well Obama didn’t deserve it. For sure. Jury is out on this plan
He’s facilitating genocide and ethnic cleansing in Gaza as the world watches in horror. Can’t really win a peace prize with that black mark on your record.
OTOH, Kissinger.
I'm sure things will be more peaceful than before after it's all over.
Heavens! Where are you getting your information from?
India-Pakistan:
"Modi denies Trump brokered peace with Pakistan"
https://www.trtworld.com/article/0e522908cdb2
A day after Modi said this, Trump imposed 50% tariffs on India.
Israel-Hamas:
Last week Israel bombed a meeting of Hamas leaders in Qatar. The leaders survived. They were there to discuss possible peace deal responses. Now that's off the table. Trump whined that Netanyahu didn't warn him. Netanyahu stated he called Trump an hour before he launched the attack.
https://www.nytimes.com/2025/09/15/world/middleeast/qatar-arab-leaders-israel.html
And Trump has finally stated that he doesn't think Putin wants a deal and that Putin might lose. So much for a peace deal for Ukraine and Russia.
Meanwhile, Saudi Arabia and Pakistan have entered into a NATO-like military treaty, and there's a realignment of political relations in the Middle East and South Asia that is reducing US influence in the region.
https://thebismarckcables.substack.com/p/saudi-arabia-and-pakistan-seal-strategic
Sometimes I feel like we all walk on the same planet, but live in different universes.
Obama's Nobel Peace Prize was ridiculous and helped to discredit the award, though not at all Obama's fault. Nominations for that year's award closed only 11 days after he took office, and the committee's deliberations were that summer. The five members of the deciding committee (all Norwegian politicians appointed by the Norwegian parliament) gave it to him basically just for existing, adding that they hoped the award might influence the new president's _future_ foreign policy choices.
Obama at that point hadn't yet done anything in particular in foreign policy nor did he or anyone else claim that he had; he'd made a couple good speeches describing some topics that he _planned_ to work on. None of those topics were unusual or particularly different from what most rookie POTUSes had said for decades. He just said 'em real nice as was his gift.
I hoped at the time that he'd politely decline the award, graciously saying something like "hopefully some progress towards world peace will one day make me a candidate for this great honor". By all accounts he and his advisors were nonplussed by that Nobel committee's weird announcement and they decided to just accept it and move on. I still think that was a penny-wise/pound-foolish call, that he'd have gained much more politically from "gosh thanks so much for thinking of me but I really can't accept right now".
>I hoped at the time that he'd politely decline the award, graciously saying something like "hopefully some progress towards world peace will one day make me a candidate for this great honor".
He accepted the award, of course, but in his speech he did acknowledge the awkward decision:
"And yet I would be remiss if I did not acknowledge the considerable controversy that your generous decision has generated. (Laughter.) In part, this is because I am at the beginning, and not the end, of my labors on the world stage. Compared to some of the giants of history who've received this prize -- Schweitzer and King; Marshall and Mandela -- my accomplishments are slight. "
https://obamawhitehouse.archives.gov/the-press-office/remarks-president-acceptance-nobel-peace-prize
Arguably, the Nobel Peace Prize was damaged no later than when Kissinger received it.
https://en.wikipedia.org/wiki/1973_Nobel_Peace_Prize
Yea. The whole episode, while trivial in the grand scheme, kind of encapsulated Obama’s core flaw as a POTUS: his assumption that the “bully pulpit” role meant that having delivered a great speech always represented meaningful action.
Regarding the Peace Prize….over in Earth 6,473,928 they tried to give me a Nobel Peace Prize (long story) and my response was two sentences: “No thank you. It is not among my life goals to join a list that includes the likes of Henry Kissinger and Yasser Arafat.”
That's a shame. I've heard Kissingers and Arafats in the 6.4M range were real standup guys.
Yeah it was so stupid. I wish Obama had declined to accept it.
In 2020, the prize went to the World Food Programme. It isn't going to a guy who has eviscerated foreign aid. Nor to the guy who bombed Iran nuclear program sites. Nor to a guy who claims and exercises the power to kill suspected drug traffickers on the high seas. Not to mention the "Department of War" stuff.
And, btw, what role did Trump personally play in supposedly resolving any of those conflicts? The 1973 prize went to Kissinger, not Nixon.
> And, btw, what role did Trump personally play in supposedly resolving any of those conflicts?
No one really knows, but my best guess is that after all Armenians were expelled from Nagorno-Karabakh, he said: "Finally, now the peace can begin! Too bad we can't do the same with Ukrainians."
>two states
So much for understanding the players involved and what demands are achievable.
Hahahaha indeed.
Rob Malley's book 'Tomorrow Is Yesterday' is an excellent dissertation on why two states were never an organic solution, given the aspirations of the peoples involved.
But at some point enough war and 3rd party interests might just wash away all that
Of course. Regional stability is good for business. The more Saudi encouragement the better. AI and other stuff, too.
Duck sex? I read about raising ducks and someone said that all duck sex is rape! The female ducks have somehow evolved to make it very difficult for the male to fertilize an egg.
https://www.sciencefocus.com/nature/duck-penis-corkscrew
I can't understand why evolution allows this. It seems like the first evolutionary glitch that started it would have reduced the probability of mating and would not have been passed down. Instead, the entire species now has females with adaptations that make mating difficult. The article says it gives the female some control, so I get why the female would be happy with that adaptation, but what evolutionary advantage does it have such that it continues not just to be passed down, but to intensify over time?
> I read about raising ducks and someone said that all duck sex is rape!
> I can't understand why evolution allows this.
What do you think evolution wouldn't allow? Rape is very common.
If you think "all sex is rape" is bad, wait until you see bed bug sex - the females don't have genitals.
A lot of animal coupling seems to lack consent. Never seen a cuddling phase. Presumably the female is sending out pheromones though.
I don't have an opinion on the evolutionary question, but as someone who had several pet ducks as a kid I can concur with that article: ducks are not gentle lovers, and I've never seen them mate without the female forcing a high-speed chase first.
What do you mean? This actually makes more sense than most adaptations. It's literally just a sexual arms race. Male ducks will naturally try to rape female ducks. Not having any sexual selection is bad in the long term, so populations where females can prevent unwanted insemination outcompeted the others. However, this also caused males born with abilities to bypass the protection to be slightly favored over time, which again forces selection for females with better protection. Hence the corkscrew penises.
Is this dumb? Yes, but it works perfectly well, so who gives a damn?
An interesting caveat to mallard reproduction is that sometimes females will fly over groups of bachelor males in order to incite the 'rape flight'.
Presumably the flight itself acts as a test for would-be mates, and the drake that proves himself to have sufficient endurance to catch her must have good genes.
It's just sexual selection. It is a major driver of speciation in many groups. The particular type of sexual selection here, besides compatibility of sexual organs, is called cryptic female choice. It refers to the female's ability to control fertilization success without the male's awareness of this ability. The concept was originally developed by William Eberhard by studying spider reproduction. In the case of ducks, the complex oviduct of the female has many dead ends where sperm can become trapped, making it difficult for an unwanted male's sperm to reach the eggs.
I have read that a mare, shortly after becoming pregnant, will attempt to have sex with every stallion in her herd, and if prevented from doing so she will abort the foal. (Because a stallion will kill any foal whose mother didn't have sex with him.)
Do you know if mares will also do this if they just don't approve of the foal's father?
My guess is: since the female duck adaptations are basically defense mechanisms (only?) against unwanted fertilization, the males' aggressiveness in sex probably evolved first. For the males, this seems advantageous since they don't depend on the female's decision. Meaning also that the more aggressive the male (and his style of sex), the better his reproduction chances. But females that can choose a better partner, rather than the first one who chose them, should have an advantage too. So for them, defensive mechanisms are advantageous.
As long as the defensive adaptations don't completely hinder reproduction, there isn't really an issue here: If a female duck has sex with say 10 male ducks over a couple of days, it's okay evolution-wise that she got fertilized by the 10th duck rather than the 1st.
I'm thinking of the first duck to evolve this, duck zero. I would expect that duck to have fewer offspring, and (I don't know, but) not to pass on that trait to every one of its offspring. I would have expected those offspring also to have fewer offspring and eventually breed themselves out, rather than the males also adapting. Just seems weird.
Well, they are also inheriting more aggressive male genes passed on to their male offspring, along with the females inheriting the more defensive genes. If the genetics of female defensive sexuality are passed through the male line to the subsequent generation, you can see why this genetic pair would start to dominate.
This is a “just so” story, of course but a lot of speculation on evolutionary paths is.
If you wish to learn a lot more about this, I recommend reading Dawkins's The Selfish Gene in whole. It's basically about how thinking of evolution as working on species or organisms is a wrong framework, and how the unit of evolution is the gene, and how you can get much higher predictive value from that frame.
It's a tradeoff between quantity and quality of offspring. Fewer offspring that are fitter and thus themselves more reproductively successful can be better in the long run. Especially if it's not actually that much fewer, since the female is much more likely to be limited on energy and nutrients than on reproductive opportunities.
Such a female duck zero would still be able to select whom to mate with, hence increasing her reproductive success compared to the general population. As long as she does not overshoot by being too defensive; and duck zero was probably only a bit defensive, since the males were not yet that aggressive.
Why would you expect this to be disallowed? To me it seems largely equivalent to any other type of female choosiness, which generally reduces the probability of mating but increases the expected fitness of the offspring.
If you have ever encountered a female cat in heat, you might question the "rape" descriptor. "Desirable in the abstract, unpleasant in the moment" also describes a lot of fully consensual human sexual contact.
I decline to describe this phenomenon further, except to say that the female does not stay quiet.
> the female does not stay quiet.
Well being rogered by a barbed dick would do that.
The female doesn't stay quiet during sex, but she also doesn't stay quiet beforehand. She issues clear calls for male cats to show up and mate.
Female cats are absolutely capable of standing their ground. They don't have much sexual dimorphism.
"Most animal sex is rape, only a few primates have the ability to feel pleasure from sex in the female. "
That is a very bold claim that is going to require some evidence. Otherwise I am calling bullshit on that statement.
He should have said many. It seems unlikely that female cats are having a good time but bonobos are a different story.
In species where long-term pair bonds matter — swans, wolves, some primates, even prairie voles — sex isn’t just about passing on the genes. It’s about cementing the relationship
In contrast, cats don’t really form pair bonds. The tom comes in, does his barbed business, and off he goes, cock-a-hoop. The female isn’t obviously distressed, but not really in post-coital bliss either.
Being unpleasurable doesn't make it nonconsensual, though. In most species that mate at all -- though I'm not sure about the exact numbers -- successful mating requires signals of about the same degree of deliberateness from both partners. Showy parades and ornaments like those of peacocks and grouse are negatively correlated with pair bonding -- the more monogamous a species is, the less sexually dimorphic it is -- but mate choice by females is the whole point of the exhibition!
ETA: Even "unpleasurable" is not a necessary implication of lack of pair-bonding. The mating-induced ovulation found in several mammal lineages may be homologous to the primate female orgasm, and share many of the same mechanisms: https://sci-hub.st/10.1002/jez.b.22690
Bestiality laws are not based on whether or not a female animal is able to experience pleasure. Women sometimes orgasm while getting raped. That does not translate to some acts of rape being ok. We have bestiality laws because animals cannot give consent, because most animals are not moral agents. They are, however, moral subjects (sometimes referred to as "moral patients").
Somehow though it's fine to probe animals' genitals without consent in agricultural contexts.
I don't think Discover Magazine should be your authoritative source for this question. Likewise, using bestiality laws to support your assertion seems a bit tenuous.
Be that as it may, porpoises have been observed engaging in sexual activity outside of reproduction, and observations of females show physical responses (including clitoral stimulation) consistent with orgasm.
Lab studies of female rats show that they exhibit rhythmic contractions and neurochemical responses synchronized to paced copulation, which researchers interpret as an orgasm-like state.
Popular history programming has gotten a bit stale. There's program after program on the vikings, the Romans, the Egyptians, and a few more. It's time for a refresh.
Your mission, should you choose to accept it, is to select parts of popular history that have been overdone, and suggest replacements.
For my part, I'm pressing pause on Rome, and substituting the Hellenistic period, from Alexander the Great to the rise of Rome. There's a good three hundred and fifty years of history there, and the geography stretches from the Mediterranean well into central Asia. And we get to talk about why the New Testament is written in Greek, not Latin or Hebrew.
Dan Carlin fan? This feels inspired by his latest series Mania for Subjugation that is focused on Alexander the Great
I'd replace WWII with the Concert of Europe era (1814-1914) in general, and the Great Game in particular. It's got intrigues and clever plots, revolutions, major technological advances, and a few wars sprinkled here and there. Plus, everyone has absolutely dashing costumes.
I'll echo a few others in bringing in the Ottomans, and I'll also suggest replacing the Egyptians with the Aztecs. Maybe replace Vikings with the contemporary Germanic tribes? They're similar culturally, so maybe a bit of a cop-out, but we don't hear a lot about them.
I agree on this, the 19th century is sadly underrated. It's maybe the most unequal of eras, one where rifle and artillery faced spears, where Sweden, Portugal or Belgium were bullying China, when science took off on an exponential course, art moved faster than ever, and politics were hecking wacky.
This reminds me of that meme about the ceremony where young men get assigned an empire to obsess about. I think the Ottoman and Persian empires were really interesting and mainly are treated as the antagonists in old school history books.
There is a desperate need for the global public to understand Chinese culture better. Engaging with them is only going to become more important and the stakes are only going to rise. Mass media treatments of Chinese history would be a good place to start.
Sorry, can’t think of any video histories of China but if you don’t mind the written word, David Roman’s Substack “A History of Mankind” has done a good job covering Chinese history up through the Han Dynasty, Roman history up through Christianity, as well as Hellenistic history.
https://mankind.substack.com/
Thanks!
There is an excess of Chinese history, I like to focus on the Tang dynasty personally.
The Song proto-industrial revolution has always fascinated me.
"Blue" on the Overly Sarcastic Productions youtube channel has a practice of focusing on the time period just before consequential events (the decades before WWI, etc.) that I appreciate. Do you think that the century or so just before the Tang would be a rewarding study?
Not really; that would be the period of https://en.wikipedia.org/wiki/Northern_and_Southern_dynasties.
Do you have a goal in mind? Of the books on Chinese history that I've read, my favorite was one about the Qin and Han dynasties, but that was mostly for reasons of personal interest. If you want to understand Chinese culture today, your best bet is recent history; the Tang dynasty ended more than a thousand years ago. It's relevant, but a lot of the ways in which it's relevant will be captured by studying more recent periods.
Note also that the Tang dynasty is 300 years long. The idea of "let's look at what happened before consequential events" doesn't really apply to dynasties, which are periods and not events. There are many consequential events within most of them. If you want to study the century before the An Lushan rebellion, you'll be getting most of the first half of the Tang dynasty.
I do, in fact, want more people to be more conversant with Chinese culture today, although a focus on politics can't hurt. So perhaps a more recent focus.
That said, however - I feel rather strongly that the best way to understand the Roman Empire is to study the Roman Republic. Ditto with just about everywhere, everywhen. The best way to understand North American colonization is to look at British history leading up to it, esp. the religious oppression. The best way to understand the US today is to go back and look at the entire 20th century, esp. the progression of presidential power.
Really, the best way to understand the world today would be to go back to the beginning about 200,000 years ago and study everything, but that seems impractical.
> That said, however - I feel rather strongly that the best way to understand the Roman Empire is to study the Roman Republic. Ditto with just about everywhere, everywhen.
You are wrong about that. If you want to understand what something is like at time X, it's always more effective to study it at time X. There is value to be gained from studying an earlier time and trying to understand how it got that way, but much less value.
If you want to learn how to speak English, you could start with Danish. But starting with English will always be better.
If you want to be immersed in the Glorious Revolution, the early Enlightenment, the Scientific Revolution, the financial revolution, Louis XIV's reign and other very important historical happenings of those times, I highly recommend the Baroque Cycle.
Besides making you feel like you’re inhabiting that world, it’s an incredible set of novels, with many satisfying payoffs that use history like a Rube Goldberg machine.
Seconded, I'm actually on the verge of re-reading those (it's been a decade).
And having read a good amount of actual history of that period, one thing I liked was that Stephenson didn't play fast and loose with it. (He inserted his own characters in and had them do interesting things.)
If you think you might be interested in Eastern/Asian history and are willing to read rather than watch videos I'll point you towards AsiaPac Books. They have comics covering events such as the Romance of the Three Kingdoms and Cheng Ho's explorations.
[Except that this specific item is sold out right now]
https://asiapacbooks.com/collections/literary-classics/products/romance-of-the-3-kingdoms-1-set-of-10?variant=29255944142900
As everyone here knows I am reluctant to push my own podcast. But you really should try this excellent episode on the Opium War. And lots of other subjects, Roman and non Roman!
https://podcasts.apple.com/gb/podcast/subject-to-change/id1436447503?i=1000705823343
China, from the 1911 revolution to 1949.
Hmm Dan Carlin is doing a series on Alexander the Great. (Though new podcasts are coming out at his typical glacial pace.) I hear that Ken Burns is doing a documentary on the American Revolution. I'm looking forward to that... due out in November
And it's a great series!
One of the things I find interesting about Carlin is that he often makes podcasts about periods and places I don't normally see, such as the Visigoths or the Münster rebellion of 1534. Untrodden paths aren't his focus per se; it's more about times of very intense passion or violence, and he lets that carry him to relatively unexplored times and locations.
In the UK, history programming has become 90% WW2. As interesting as that is, I would love some Roman, Viking or Egyptian history. The colonial era is the most overlooked, as it makes people uncomfortable.
I want pre-tokugawa Japan. What were things like there? Seems chaotic.
Have you listened to any "History on Fire"? (You have to get past his thick Italian accent.)
The late Sengoku era is probably the single most popular period for Japanese media to cover, and if not it's up there with the Bakumatsu and WW2.
(earlier could definitely use more treatment in fiction though... Onin War anime WHEN?)
I was mentally lumping late Sengoku/Tokugawa rise in with them. What I was thinking of is more the 1000-1500 period.
I'd drop the fall of the Roman Empire and replace it with the fall of the Tang Dynasty in China. It's a fascinating tale that is big in modern Chinese culture but is almost unknown in the West. A good jumping off point is the Battle of Talas, one of the only times a Chinese army fought a Muslim army directly, 20 years after the Battle of Tours.
Less colonial period in general; I want to know something (anything) about the post-Roman / pre-Muslim history of North Africa and Iberia.
I really want to understand the Islamic Golden Age. Would love some historical dramas in that time period.
Yes please.
I call for the return of Sophonisba
The last hundred years of the republic are more interesting than the empire. And they have lessons for today (amiright?).
The recent Sutton x Dwarkesh interview sparked a massive debate in the AI "community" over whether LLMs are a dead end
What are your thoughts?
Yes, I think it's very much a dead end. (As far as ASI is concerned.) (Though I think Peter Thiel is also right, in that current LLMs are smart *enough* to have noteworthy economic impact.)
Like I've been saying on here for ages, I don't expect an AI to be an ASI unless it can drop into a subspace where it can do logical and causal inference (not just draw correlations), in order to do reasoning that's actually original and insightful. I.e. logical and causal inference are what allows you to do engineering and science (e.g. put a man on the moon for the first time), as opposed to what Eliezer called "guessing the teacher's password".
Below, Alex Scorer writes
> scaling LLMs alone gets very high narrow intelligence
And I see where he's coming from. But from *my* perspective, the issue is that merely drawing correlations casts the net too *wide*. Which is why Chain of Thought goes off the rails at a certain complexity threshold. It's like recursively feeding an image to a fax machine. It's not a lossless process, so eventually the image drifts into noise.
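The fax-machine analogy is easy to make concrete. A toy sketch, pure simulation with nothing LLM-specific assumed: each 'pass' copies a state with small independent errors, so the accumulated error variance grows linearly with the number of passes.

import random

def noisy_copy(state, sigma=0.1):
    # One 'fax pass': reproduce each value with a small random error.
    return [v + random.gauss(0, sigma) for v in state]

state = [1.0] * 100  # the original 'image'
for _ in range(200):
    state = noisy_copy(state)

# Independent errors accumulate: variance after k passes is sigma^2 * k,
# so the mean squared drift here should land near 0.1^2 * 200 = 2.0.
drift = sum((v - 1.0) ** 2 for v in state) / len(state)
print(f"mean squared drift after 200 passes: {drift:.2f}")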
I certainly hope so, but imo it's very unlikely that LLM+something else could never surpass humanity.
Training Gen-1-LLMs on human-generated text is a dead end. But I think we can train Gen-2-LLMs on carefully curated transcripts of Gen-1-LLM conversations. Then rinse and repeat.
At least that is the course I hope we will take. Another, more-dangerous path forward is the one Sutton seems to be advocating. Shifting from chatbots to embedded agents with lifelong learning. But there may well be many other paths to ASI.
If we don't all die, it won't be because we ran into a dead end and stopped making progress.
Why do you think running into a dead end isn't a plausible way we avoid being killed by superintelligence?
Because we simply back up and try, try again. My (ideological) belief is that ASI is technically possible. The only questions are how and when. And, for ASI *agents*, whether we decide to do it.
I haven't seen any massive debate—at least I haven't seen any pushback. Stunned silence seems to be the feedback so far.
Do you have any links to essays or Youtube videos either riffing on Sutton or offering pushback to Sutton?
I was reading the comments on Zvi’s
post about the debate, and was impressed by one guy's comments, and am now reading a post of his that he linked to. Steven Byrnes. Still impressed. https://www.alignmentforum.org/posts/TCGgiJAinGgcMEByt/the-era-of-experience-has-an-unsolved-technical-alignment
It’s not a direct response to the debate, though; in fact the post was put up in April of this year.
I wrote a post recently that included an experiment to try to show LLMs have advanced a lot in terms of shallow thinking, but not deep thinking. If true, the gains we'll get from continuing to scale up LLMs will not give us novel insights like curing cancer or whatever.
Here's the post: https://taylorgordonlunt.substack.com/p/llms-suck-at-deep-thinking-part-3
Your experiment seemed to find that LLMs have advanced a lot in deep thinking! GPT 5 did so much better than the other models!
And also, my sense is that using the online interface of Claude or ChatGPT (especially Pro-tier) reliably does exactly the kind of deep thinking needed to solve these problems!
Better initial models are getting better at "shallow" or "System 1" thinking, and we are getting much better inference-heavy architectures like in the online interface or Codex/Claude Code to progress in "deep" or "System 2" thinking. The only question is whether this combination will continue to deliver marginal improvements until we get to escape velocity, or whether it will hit some ceiling or bottleneck.
My vibes-based intuition is that scaling LLMs alone gets very high narrow intelligence - as we've seen already - but doesn't hit AGI. There's a persistent underlying stupidity to LLMs which hasn't improved to anywhere near the extent of the things they're good at. This makes me doubt that scaling can fix it, and makes me think some supporting paradigms will need to be included to finish the job.
I think we need more adversarial/collaborative architectures. Like multiple LLMs engaged in a task.
One generates a task graph and drives execution to completion. Each step of the task graph has generators and validators. The generators try to propose solutions, the validators try to poke holes in them. That generation/validation process goes on until the validators are finding only tinier and tinier holes, or until the process converges, and then a third process looks at their output and says, "ok, this is good enough."
I think this gets you further than we are now.
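For what it's worth, a minimal sketch of the shape of that loop. All of it is hypothetical: llm(role, prompt) stands in for separate model instances per role, and the fake body just lets the loop terminate so the sketch runs end to end.

def llm(role: str, prompt: str) -> str:
    # Hypothetical stand-in; wire this to real model instances per role.
    return "no" if role == "judge" else f"[{role} output for: {prompt[:30]}...]"

def solve_step(task: str, max_rounds: int = 5) -> str:
    draft = llm("generator", f"Propose a solution to: {task}")
    for _ in range(max_rounds):
        critique = llm("validator", f"Poke holes in this solution:\n{draft}")
        verdict = llm("judge", f"Is this critique substantive (yes/no)?\n{critique}")
        if verdict.strip().lower().startswith("no"):
            break  # validators are only finding tinier and tinier holes
        draft = llm("generator", f"Revise to address:\n{critique}\n\n{draft}")
    return draft  # "ok, this is good enough"

print(solve_step("sort a list"))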
LLMs illustrate that the fundamental nature of intelligence is interconnected neuron density. It's likely that, similar to how brains have had to evolve various structures and scaffolding such as the amygdala, hippocampus, and the various cortexes and neocortexes, there will need to be similar scaffolding for LLMs in the future.
Hmmm. Do you think intelligence will simply emerge when neurons reach a critical level of density and interconnection? This seems like magical thinking to me.
The correlation between neuron density and intelligence has been settled science for centuries at this point. It’s undisputed that the more of one, the more of the other.
As for discontinuous emergent capabilities, there’s some great recent research supporting the hypothesis:
https://arxiv.org/abs/2206.07682
https://arxiv.org/html/2405.17088v1
> The correlation between neuron density and intelligence has been settled science for centuries at this point.
Nope. That may have been scientific dogma four decades ago, but our current understanding of comparative neuroscience tells a much more complex story.
For instance, songbirds have significantly higher neuron densities than primates. But their neural design seems to be optimized for song production and processing. The parrot family and the crow family also have dense neural arrangements (but not as dense as songbirds), but they display problem-solving, tool-using, and complex social behaviors—without having the pre-frontal cortex that neuroscientists claim handles the higher-level cognitive functions in humans.
IQ-fetishists like Crémieux like to claim a strong association between brain volume and intelligence, but even though, on average, men have larger brains than women (due to body size differences), there is no consistent difference in mean IQ between sexes.
Moreover, cetaceans have much larger brains with more gray-matter neurons than humans. Still, they don't seem to display the same level of intelligence as humans (although IIRC, they also lack a clearly defined pre-frontal cortex).
So, generalizations about neuron density, brain size, and intelligence don't hold. There was a lot of hand-waving a while back about Intelligence being dependent on synaptic plasticity and pruning. The New Scientist would publish breathless articles about the latest theories, but I stopped paying attention because there didn't seem to be much there there.
When you say "intelligence" do you actually mean self-awareness?
Someone below was wondering... why don't you use any capitals in your writing? Aesthetics, too much of a hassle, something else?
Interesting! I mostly thought it was a cool coincidence that someone below was asking specifically about people who blog/text with zero capitalization, and then you came along. So I figured I'd ask.
I appreciate the invitation, but I do most of my sleeping at night and I don't love the idea of giving that up. Also, I don't desire much to be on a podcast.
One of the hypotheses I raised in that thread was broken shift keys. I feel strangely vindicated.
"Perhaps what's emerging isn't about control at all, but about connection - a vast web of Atman-to-Atman bridges that make tyranny obsolete because every node can recognize every other node as kin."
Hello. Let me introduce you to human history. Civil wars, for example. That sure as heck is "every node can recognise every other node as kin". And yet.
So why think that some fancy LLM is going to change all that?
"We've made LLMs absolutely useless for tyranny."
Brother, we have just strapped the saddles on our backs for the tyrants to ride us.
'Member way back when the Internet was going to do all that this hodge-podge of poorly digested Buddhism and Hinduism suggests? 'Member how "information wants to be free" and we were all gonna make connections and beat our swords into ploughshares and no more bad things because the Information Superhighway was going to connect us all globally and we'd recognise our common humanity and destiny to be divinities and hold hands and a thousand flowers would bloom?
'Member that? And how did it end up? "The Internet is for porn".
Who knows what the future will hold? I try to be neither a techno-optimist nor a techno-pessimist. New technologies are so hard to predict that they could transform the world in ways that were impossible to foresee. Maybe LLMs will save humanity! WHO KNOWS.
> "The Internet is for porn"
Oh, those nostalgic innocent times!
The internet is for spreading misinformation, polarizing the populace and ideological warfare.
How we all wish it was just for porn like it used to be!
EDIT, on returning a few hours later: *gestures wordlessly at the everything*
Statistics show that legalizing porn reduces rape.
These studies were not done in a country that has child pornography as its largest economic output -- in that child pornography pretty much requires rape (was it not clear that I was referencing child pornography above?). Pretty damn sure that "non-child" pornography is legal there, too. (The child pornography isn't primarily for local consumption.)
I'm all for legalizing "of-age" pornography (and if you put that age at "somewhere above 13" we have room to discuss "cultural values").
I would wager said studies are also not done in countries where majorities of both women and men believe that "if you want sex and the other person doesn't, you should have sex." (Yes, you can say this is a "sex positive" religious thing.)
Is this another one of those "no one knows what you are talking about, and you won't provide evidence when asked" things?
Do you speak English as a second language? I do not ask this to be derogatory, but it is often difficult for me to understand your comments. Not impossible, but difficult.
The last two paragraphs of the comment I replied to were difficult. "As curated by the extremely autistic" is hard to process. WHICH extremely autistic? How did these extremely autistic achieve such profound influence?
"Test-data for categorization" is a bit of a stumper.
But the most perplexing one of all is "I'm vomit." I'm chronically online, but I am 41 too, so maybe this is some type of meme or popular-culture thing for people much younger than me?
ADHD mathmo here.
At the start of my masters I sat down and calculated that I needed 12 hours of solid work to do the homework well for each problem sheet, multiplied by the number of problem sheets, made a timetable and then went to the library with nothing but paper, pen and textbooks and did that time.
For the exams I just spent every day in the library from around Easter, and did well in exams.
Just put in the time and put yourself in an environment where you can't be distracted.
Psychologist here. Based on my own experience and what I’ve seen of others, spending a lot of time online has a massive effect on attentional habits. We become much more used to moving on to someplace else when what we are reading is not attention-grabbing enough. And that goes on even when we are not online for entertainment, but in order to learn something we want to master. We bail more quickly on an info source if we realize that extracting the info is going to be a bit difficult. We will have to wade through some irrelevant material to find the good stuff, or look up unfamiliar terms, or take on a paragraph that is full of difficult info, and comb out that tangle until it makes sense. I don’t think our brains are rotted, more in ruts formed in a setting where it works reasonably well to give up quickly on reading things that aren’t quickly satisfying.
My personal experience is that the impact of online attentional habits on book-reading and studying is not a bit subtle; it is *large.* The difference in how much of each I do compared to pre-internet days is enormous. The difference in how hard I find it to focus on book-reading and studying now is very big. However, my ability to do both is intact, and I can get back to the habits that underlie it by forcing myself to do uninterrupted periods of each. So I recommend you work on the theory that what is wrong is attentional habits. It’s especially likely that that’s what’s wrong if you did better as an undergrad at studying hard enough for tests to get a result that satisfies you, which in your case is A’s, it sounds like. If as an undergrad you managed to come through with an excellent performance on tests and projects, it is unlikely you have attentional problems that are preventing you from doing better now.
Here are 4 ways to work on changing your attentional habits:
1) Become aware of your present ones online. On a few occasions, go back over your history for the last half hour, and notice any jumps to a new site or new part of a site. Go back to the sites you went to and make a few notes on why you left a site or part of a site. Next, have some periods when you are browsing online during which you try to stay very aware of your engagement levels and your urge to jump elsewhere. It’s ok to jump when you feel like it. We are not trying to change your online attentional habits, just make you more self-aware of attentional cravings and what you do when you feel one.
2) Have some periods when you do uninterrupted reading, homework, or other hardcore studying. Make them small to start — maybe 10 mins. You are likely to feel many cravings to stop during that period. Do not give in to the urge to interrupt your studying by looking at something online or getting a snack or changing chairs. Instead, every time you resist a craving to do something other than the task you’ve given yourself, make a tick mark. Let them add up. Notice the proof that you can resist the urges if you have decided in advance to do that. Make the periods of uninterrupted work longer, up to about half an hour. It is very important to *not* to have periods when you are doing some hybrid of studying & browsing — studying, but browsing when you feel an urge to. You are trying to train yourself out of the when-bored-go-somewhere-more-fun-online habit. If you want to browse online, do it, but don’t combine it with studying. And when you study, do not interrupt what you’re doing to go online.
3) Partway through 2), start keeping track of moments when you spontaneously check in with yourself, and notice that doing the studying feels OK or better. Once you have broken the habit of bailing when concentrating is unpleasant, you will have fewer moments when concentrating sucks, because unpleasant concentration has stopped counting as permission to browse online. It’s not being “rewarded.” So every now and then while studying, you’ll notice that “oh, this is not so bad” or “I’m resigned to doing this problem set” or even “actually, this stuff is pretty interesting.” OK, when you notice that, make a tick mark. The point of this is to untrain the idea that life outside of browse mode is highly unpleasant.
4) So once your concentration is better, you still should work on having good study habits — things like planning a date on which you should start working on a certain thing. I recommend that you use some app for assistance with big-picture follow-through. Personally I like Beeminder, and if you are not familiar with it you should check it out.
This is great.
I’ve been thinking about how every channel I’m part of has spam. I need an irl spam blocker.
Do you have academic accommodations? If not, look into it - you might be able to get extra time on tests, a low-distraction environment, etc.
As a math teacher who sees this pattern often in kids, there are probably 3 things you should ask yourself.
1. How much of the homework is being checked by your teacher or math support? If homework is just being graded for completion, or isn't being diligently checked, you could be doing the work wrong and not having corrective action taken to address issues. Going to a math support office or finding your teacher or a TA outside of class to check your work will help address this problem.
2. When you get a problem wrong in math, how often are you redoing the problem and practicing the skill with a new problem? I see this a lot with both very smart students and ADD/ADHD students. It is very easy, when you get a problem wrong, to look at the answer key or the teacher's work, identify the problem, and then dismiss it as only a silly mistake that isn't worth addressing. One says to oneself "oh, I didn't flip the greater-than sign" or what have you, and tricks oneself into thinking one can do it right without extra practice. My rule of thumb is that for every problem I (or a student) get wrong, do another 2 problems just like it for practice. This is becoming much easier, as you can prompt an LLM to generate problems like another problem.
3. How often are you testing yourself under time pressure? You may have high accuracy, but completing math quickly is a skill on its own that practice without time pressure doesn't develop. Along these lines, understand the technology you can use (calculator, DESMOS, etc) and learn how to use it quickly. If time is an issue, have a teacher show you how to use your tools to increase your speed.
The dirty little secret is that no one is going to care what your graduate GPA was. Far more important is the status of the institution where you are getting your degree, and impressing one or more faculty mentors, doing some research project or projects under their guidance, and using that as leverage into introductions with potential employers in your field. BTW - what is your field?
Statistics will always be in demand. I'm mostly familiar with applications using human behavior data sets, but there are many domains of application, from population forecasting to personality inventories to opinion surveys, and of course the old standby, disease contagion. Pick what interests you and work it.
The periodic table is a taxonomy not a map. It doesn't represent a continuous metric space so applying a geometric transformation to it doesn't make sense. The layout just represents the structure of atomic orbitals.
Sorry but this question doesn't make sense.
No, taxonomies don't have any inherent physical structure. They're just groupings.
Do you not know the purpose of the periodic table? Try reading this: https://en.wikipedia.org/wiki/Periodic_table
I do not believe matter is discretized at all. What you describe is an abstraction that allows a further extension of the fundamental lie that anything in reality has a smallest quantum. The smallest measurable quanta are the quanta that fit a model that is predictable. The scientific method's principle of repeatability created a trajectory of human revelation of knowledge that is too easily planned and carried out by non-humans before humanity. It places our adversaries at an advantage. Yet it is still efficient as a caretaker of technical progress as humanity expands our presence. But what if we have allies who wish to give us gifts to leap ahead of this slow and methodical process. What if those gifts are being stolen by the powers that were?
Nature disagrees with you. Spin is highly quantised, for instance.
Nobody's claiming atoms are fundamental, or not made of smaller parts.
There are no parts.
proof
Let a ≠ a
QED
I asked gpt5 this question exactly as you wrote it, this is what it said: https://chatgpt.com/s/t_68dab141061481919ba31c5cf0768e08
I am not able to judge how good its answer is, and would like to hear the views of people who are able to.
$20
The periodic table *is* already a 2D representation of a 3D shape. The periodicity of the periodic table that led to our current arrangement was first noticed by Alexandre-Émile Béguyer de Chancourtois when he created a "Telluric Screw" listing the elements on a cylinder.
https://en.wikipedia.org/wiki/Alexandre-%C3%89mile_B%C3%A9guyer_de_Chancourtois
https://collection.sciencemuseumgroup.org.uk/objects/co13134/model-of-the-periodic-system-of-de-chancourtois
Was he listing the elements inside the cylinder, or just on the surface? The surface of a cylinder differs from an ordinary sheet of paper only in that you can cross from the left edge to the right edge, which is a motion with no meaning to the periodic table.
(A thread around a screw does make more sense than a cylinder; moving from neon to lithium makes no sense, but moving from neon to sodium is in a sense the same thing as moving from fluorine to neon. But it will suffer from the fact that you would actually need a thread around a cone-like shape with strange curvature. The problem is already evident in the model shown at your link, where the next element "like" sodium and potassium is supposed to be manganese. I'd like to see sodium form an ionic bond where it's donating three electrons.)
A map of (the surface of) the world, like the cylindrical surface, is also a 2D representation of a 2D shape. You might identify the two dimensions, for example, as latitude and longitude. It is a shape with curvature, though, so (unlike the cylindrical surface) it's convenient to use three dimensions for many purposes.
The natural shape of the periodic table is fairly straightforward: you have one dimension describing the number of electron shells existing around the atom, and a second dimension describing the number of electrons the outermost shell contains. The extent of the second dimension is constrained by the value of the first; it must range between 1 and 2n². So the shape of the table looks like the region under a two-dimensional graph of the curve f(n) = 2n². This shape is already flat and doesn't really benefit from being embedded in a three-dimensional space.
We then cut that shape through the middle so that the noble gases can be in a column. This is basically the same idea as the https://en.wikipedia.org/wiki/Goode_homolosine_projection .
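(If it helps to see the 2n² shape concretely, here's a minimal Python sketch of my own, not anything from the comment above: it just prints the naive "region under f(n) = 2n²" shape, one row per shell.)

```python
# Minimal sketch (my illustration, assuming the naive 2n^2 shell-capacity rule):
# row n is the n-th electron shell, and its width is the shell's nominal
# capacity of 2n^2 electrons.
for n in range(1, 5):
    capacity = 2 * n**2            # 2, 8, 18, 32
    print(f"shell {n}: {'#' * capacity}")
```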
hell yeah
How's the eating of shit and curing of incurable diseases going? (I'm not shitting you either, I would never joke about fantastic fecal findings.)
Coming along. Working on a patent
Can I ask why?
The various elements have some relationships between them, which could naturally be thought of as a sort of graph structure. A graph doesn't necessarily fit any particular dimension, but the higher the dimension you allow, the more freedom there is to avoid distortions of the graph when embedding it into that space. With the periodic table in particular, it naturally has a sort of 2-manifold-like structure because of the shells and periodicity thing, so the faithfulness of the embedding kind of caps out at dimensions a little above 2.
The hypothetical 4D architect would presumably be more interested in a whole different set of 4D elements, rather than studying ours.
You're kind of asking the wrong question. The periodic table represents elements and their properties. It looks that way because elements work that way. You have a discretised set of elements because they have a whole number of protons in the nucleus. You can't have half a proton, so you can't have an element between hydrogen and helium.
You have a repeating structure in the rows of the table because of the way chemical properties arise from the structure of electron orbitals: if the outermost shell is full, the element is unreactive. The next element can react by donating an electron, but as you add electrons, you fill up the shell and eventually get another unreactive element.
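(To put a worked example under the "fill up the shell, get another unreactive element" point, here's a quick sketch of my own, not the commenter's: summing the actual row lengths of the table lands exactly on the noble gases.)

```python
# Quick check (my own arithmetic, not from the comment): the row lengths of
# the real periodic table, summed cumulatively, give the atomic numbers of
# the unreactive noble gases: He, Ne, Ar, Kr, Xe, Rn, Og.
period_lengths = [2, 8, 8, 18, 18, 32, 32]

z = 0
for length in period_lengths:
    z += length
    print(f"outer shell fills at Z = {z}")   # 2, 10, 18, 36, 54, 86, 118
```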
I don’t really get it but good luck in your search
I don't know what you want out of this question so I'm not sure if this is helpful, but there are answers to this on the Web already (that might be in the AIs' training sets). At https://www.av8n.com/physics/periodic-table.htm for example.
Well, do you know that the periodic table is underpinned by the QM electron wavefunction? The wavefunction is mostly about the spherical harmonics and solutions for different energy and angular momentum states.
Oh well, it's been a long time since I took modern physics in college. But to some approximation the atomic orbitals are very similar to the solution of the hydrogen atom, where you have one electron and one proton, and that's a two-body problem and physics types can solve it. You get these spherical harmonics* which we label as atomic orbitals, and you can find periodic tables with all the atomic orbitals listed... and you can see the symmetry (or correspondence, whatever the right word is). https://www.chem.fsu.edu/chemlab/chm1045/e_config.html
Electrons fill the 1s, then 2s, then 2p, 3s, 3p, and then 4s before 3d... (things get more complicated), etc. 1, 2, 3... are the principal quantum numbers, ~energy, and the s, p, d... are angular momentum states, ang. mom. = 0, 1, 2... And then you get two electrons in each 'state' because the electron has spin 1/2 and there are two spin states per solution. And only one electron per state, 'cause they're fermions and obey the Pauli exclusion principle: https://en.wikipedia.org/wiki/Pauli_exclusion_principle. Which is one of the coolest things ever. I mean, it's how you can walk on a bridge and not fall through!
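(For anyone who wants that filling order spelled out, here's a small Python sketch of mine, a simplification that ignores real-world exceptions like chromium and copper, of the Madelung n + l rule that generates the 1s, 2s, 2p, 3s, 3p, 4s, 3d... sequence.)

```python
# Sketch of the Madelung (n + l) rule, assuming the idealized ordering with no
# exceptions: subshells fill in order of increasing n + l, ties broken by
# lower n. Each (n, l) subshell holds 2*(2l + 1) electrons: (2l + 1) values
# of m, times two spin states, one electron per state by Pauli exclusion.
subshell_letter = "spdfg"

orbitals = [(n, l) for n in range(1, 6) for l in range(n)]
orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

for n, l in orbitals:
    print(f"{n}{subshell_letter[l]}: {2 * (2 * l + 1)} electrons")
# -> 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, ...
```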
Hmm, well, in second or third year physics you find the solutions to the hydrogen atom. McGervey's "Intro to Modern Physics" was the book I used, but there must be lots online now. Look for solutions to the hydrogen atom.
Oh Feynman does the hydrogen atom! https://www.feynmanlectures.caltech.edu/III_19.html
I will. I've been there. We can chat about it if you want.
Sorry to hear! If you're reading this blog, it's likely the case that you've sufficient intelligence to overcome this. I was hospitalized a number of times in late adolescence and am totally fine as an adult. Feel free to message me if you would like some encouragement.
I really don't think dealing with literal psychosis is a matter of intelligence...
I asked Scott about this earlier, what was the biggest predictor of people overcoming mental health issues. If I recall, he said it was intelligence and ability to function beforehand. Either way, you will need some source of love in your life, so you feel safe enough to recognize the patterns you’re stuck in.
Will do and hoping you get better. St Dymphna, pray for him!
> St Dymphna, pray for him!
Theology question: do the saints in heaven pray? Or do they ACT on the prayers from below?
Yes and yes. We request them to intercede for us (see e.g. the Litany of the Saints https://www.youtube.com/watch?v=8R0E_u6D76M&list=RD8R0E_u6D76M&start_radio=1) and we also hope for their aid, as through the grace and mercy of God they are channels of His divine power to us. It is not by their own power, but by the gifts God has bestowed upon them, that they work miracles.
There's a long article here in an old encyclopaedia, and the language is a little old-fashioned but sound:
https://www.newadvent.org/cathen/08070a.htm
"We shall here speak not only of intercession, but also of the invocation of the saints. The one indeed implies the other; we should not call upon the saints for aid unless they could help us. The foundation of both lies in the doctrine of the communion of saints. In the article on this subject it has been shown that the faithful in heaven, on earth, and in purgatory are one mystical body, with Christ for their head. All that is of interest to one part is of interest to the rest, and each helps the rest: we on earth by honouring and invoking the saints and praying for the souls in purgatory, and the saints in heaven by interceding for us."
This includes a quote from the famously prickly St Jerome, which I have to share as both an example of the view about the veneration and invocation of the saints, and Jerome's ability to put the boot in to his opponent:
https://www.newadvent.org/fathers/3010.htm
"Among other blasphemies, he may be heard to say, What need is there for you not only to pay such honour, not to say adoration, to the thing, whatever it may be, which you carry about in a little vessel and worship? And again, in the same book, Why do you kiss and adore a bit of powder wrapped up in a cloth? And again, in the same book, Under the cloak of religion we see what is all but a heathen ceremony introduced into the churches: while the sun is still shining, heaps of tapers are lighted, and everywhere a paltry bit of powder, wrapped up in a costly cloth, is kissed and worshipped. Great honour do men of this sort pay to the blessed martyrs, who, they think, are to be made glorious by trumpery tapers, when the Lamb who is in the midst of the throne, with all the brightness of His majesty, gives them light?
5. Madman, who in the world ever adored the martyrs? Who ever thought man was God? Did not Paul and Barnabas, when the people of Lycaonia thought them to be Jupiter and Mercury, and would have offered sacrifices to them, rend their clothes and declare they were men? Not that they were not better than Jupiter and Mercury, who were but men long ago dead, but because, under the mistaken ideas of the Gentiles, the honour due to God was being paid to them. And we read the same respecting Peter, who, when Cornelius wished to adore him, raised him by the hand, and said, Stand up, for I also am a man. And have you the audacity to speak of the mysterious something or other which you carry about in a little vessel and worship? I want to know what it is that you call something or other. Tell us more clearly (that there may be no restraint on your blasphemy) what you mean by the phrase a bit of powder wrapped up in a costly cloth in a tiny vessel. It is nothing less than the relics of the martyrs which he is vexed to see covered with a costly veil, and not bound up with rags or hair-cloth, or thrown on the midden, so that Vigilantius alone in his drunken slumber may be worshipped. Are we, therefore guilty of sacrilege when we enter the basilicas of the Apostles? Was the Emperor Constantius guilty of sacrilege when he transferred the sacred relics of Andrew, Luke, and Timothy to Constantinople? In their presence the demons cry out, and the devils who dwell in Vigilantius confess that they feel the influence of the saints. And at the present day is the Emperor Arcadius guilty of sacrilege, who after so long a time has conveyed the bones of the blessed Samuel from Judea to Thrace? Are all the bishops to be considered not only sacrilegious, but silly into the bargain, because they carried that most worthless thing, dust and ashes, wrapped in silk in golden vessel? Are the people of all the Churches fools, because they went to meet the sacred relics, and welcomed them with as much joy as if they beheld a living prophet in the midst of them, so that there was one great swarm of people from Palestine to Chalcedon with one voice re-echoing the praises of Christ? They were forsooth, adoring Samuel and not Christ, whose Levite and prophet Samuel was. You show mistrust because you think only of the dead body, and therefore blaspheme. Read the Gospel— The God of Abraham, the God of Isaac, the God of Jacob: He is not the God of the dead, but of the living. If then they are alive, they are not, to use your expression, kept in honourable confinement.
6. For you say that the souls of Apostles and martyrs have their abode either in the bosom of Abraham, or in the place of refreshment, or under the altar of God, and that they cannot leave their own tombs, and be present where they will. They are, it seems, of senatorial rank, and are not subjected to the worst kind of prison and the society of murderers, but are kept apart in liberal and honourable custody in the isles of the blessed and the Elysian fields. Will you lay down the law for God? Will you put the Apostles into chains? So that to the day of judgment they are to be kept in confinement, and are not with their Lord, although it is written concerning them, They follow the Lamb, wherever he goes. If the Lamb is present everywhere, the same must be believed respecting those who are with the Lamb. And while the devil and the demons wander through the whole world, and with only too great speed present themselves everywhere; are martyrs, after the shedding of their blood, to be kept out of sight shut up in a coffin, from whence they cannot escape? You say, in your pamphlet, that so long as we are alive we can pray for one another; but once we die, the prayer of no person for another can be heard, and all the more because the martyrs, though they cry for the avenging of their blood, have never been able to obtain their request. If Apostles and martyrs while still in the body can pray for others, when they ought still to be anxious for themselves, how much more must they do so when once they have won their crowns, overcome, and triumphed? A single man, Moses, oft wins pardon from God for six hundred thousand armed men; and Stephen, the follower of his Lord and the first Christian martyr, entreats pardon for his persecutors; and when once they have entered on their life with Christ, shall they have less power than before? The Apostle Paul says that two hundred and seventy-six souls were given to him in the ship; and when, after his dissolution, he has begun to be with Christ, must he shut his mouth, and be unable to say a word for those who throughout the whole world have believed in his Gospel? Shall Vigilantius the live dog be better than Paul the dead lion? I should be right in saying so after Ecclesiastes, if I admitted that Paul is dead in spirit. The truth is that the saints are not called dead, but are said to be asleep. Wherefore Lazarus, who was about to rise again, is said to have slept. And the Apostle forbids the Thessalonians to be sorry for those who were asleep. As for you, when wide awake you are asleep, and asleep when you write, and you bring before me an apocryphal book which, under the name of Esdras, is read by you and those of your feather, and in this book it is written that after death no one dares pray for others. I have never read the book: for what need is there to take up what the Church does not receive? It can hardly be your intention to confront me with Balsamus, and Barbelus, and the Thesaurus of Manichæus, and the ludicrous name of Leusiboras; though possibly because you live at the foot of the Pyrenees, and border on Iberia, you follow the incredible marvels of the ancient heretic Basilides and his so-called knowledge, which is mere ignorance, and set forth what is condemned by the authority of the whole world. I say this because in your short treatise you quote Solomon as if he were on your side, though Solomon never wrote the words in question at all; so that, as you have a second Esdras you may have a second Solomon. 
And, if you like, you may read the imaginary revelations of all the patriarchs and prophets, and, when you have learned them, you may sing them among the women in their weaving-shops, or rather order them to be read in your taverns, the more easily by these melancholy ditties to stimulate the ignorant mob to replenish their cups."
Somewhat related question: as I understand it, one of the requirements for being a saint is for people to have received miracles by praying to them. Does this mean that it's permissible to attempt to pray to someone who isn't yet recognized as a saint? Or am I misunderstanding something?
Sure, if you think this person is of special virtue.
But you can always ask the blessed dead to intercede for you, the same way you would ask a living person to pray for you. Maybe not to get a miracle, but for help.
That's the communion of saints bit - even the souls in purgatory mutually help us because they are blessed (the faithful departed).
You can also always pray *for* the deceased, even if (or especially if) you are worried that they may not be saved. Since we can't definitively say (except in very, very few cases) that "X is definitely going to hell" (unless X persists in mortal and unrepented sin to their last gasp), you can pray for the repose of their soul. "Between the saddle and the ground, is the mercy of God".
I was raised a Catholic, but this is something I never understood. English Catholicism is very … cultural. My parents are believers, but I never saw a rosary bead.
So on to the question: we are praying to saints in heaven now. And you say we can pray to the blessed departed. Now.
So… what happens at the day of judgement?
Generally speaking, the Catholic Church requires two miracles for the canonization of a new saint. So yes, it would be considered acceptable to ask for the intercession of someone not yet designated a saint. If a formal canonization process is ongoing, there's all sorts of research about the person's life, then possibly the eventual approval of the Vatican (at which point a potential saint is called "Venerable"). But even if no canonization process is involved, I don't think it would be unusual or unacceptable to ask for the intercession of a deceased relative or friend.
This has been your Irregular Theology Hour, dear commentariat of ACX! 😁