How to Recover Stolen Cryptocurrency: A Step-by-Step Guide
Losing cryptocurrency to a scam or hack can be devastating. Many victims immediately search for ways to recover stolen cryptocurrency, hoping to reverse the damage. While recovery is never guaranteed, you can take important steps to improve your chances and avoid falling for a second scam.
1. Document Everything Immediately
Act fast and gather evidence:
• Save transaction IDs (TXIDs) and wallet addresses.
• Take screenshots of websites, emails, or chats with scammers.
• Record the date, time, and exact amount stolen.
This documentation will be critical for exchanges, law enforcement, or blockchain forensics.
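If the theft was on Bitcoin, you can also archive the permanent on-chain record of the transaction yourself. A minimal sketch, assuming the public Blockstream Esplora API (its /tx/{txid} endpoint); the helper name and output file are illustrative, and other chains need a different explorer:

```python
# Sketch: save the on-chain record of a transaction as evidence.
# Assumes the public Blockstream Esplora API for Bitcoin; other chains
# require a different block explorer.
import json
import requests

def archive_txid(txid: str, out_path: str) -> None:
    """Fetch a transaction's details and store them alongside your screenshots."""
    resp = requests.get(f"https://blockstream.info/api/tx/{txid}", timeout=30)
    resp.raise_for_status()
    record = resp.json()  # inputs, outputs, addresses, confirmation status
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)

archive_txid("<your TXID here>", "stolen_tx_evidence.json")
```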
2. Report the Theft to Your Exchange or Wallet Provider
If your stolen funds passed through a centralized exchange (like Binance, Coinbase, or Kraken):
• File a report through their official support pages (avoid search ads).
• Provide transaction details and evidence.
• Ask whether the funds can be flagged or frozen.
3. Report to Cybercrime Authorities
Reporting increases the likelihood of recovery, especially when funds cross borders.
4. Beware of Recovery Scams
Scammers often target victims again with false promises. Be alert for:
• Guaranteed recovery claims
• Upfront payment requests
• Contact only via Telegram/WhatsApp
• Requests for private keys or seed phrases
👉 Remember: no legitimate company will ever ask for your seed phrase.
5. Strengthen Your Security for the Future
Even if recovery is not possible, protect yourself from repeat incidents:
• Store assets in hardware wallets (Ledger, Trezor).
• Enable 2FA (two-factor authentication) on all exchanges.
• Bookmark official exchange websites — avoid phishing links.
• Never share your private keys.
Final Thoughts
Recovering stolen cryptocurrency is difficult, but taking the right steps immediately can make a difference. Document everything, alert your exchange, report to cybercrime authorities, and avoid recovery scams.
The best protection is prevention — but if you’ve already fallen victim, acting quickly and using official channels gives you the best chance of recovery. Forward all your documented evidence to the appropriate authorities.
Scott seems to favor DIY-compounding GLP-1 drugs from cheap raw materials online, but he leaves us without guidance as to next steps. So... what are the next steps?
(Context: In his post on the upcoming "Ozempocalypse" Scott says, *nod nod, wink wink*:
"Others are turning amateur chemist. You can order GLP-1 peptides from China for cheap. Once you have the peptide, all you have to do is put it in the right amount of bacteriostatic water. In theory this is no harder than any other mix-powder-with-water task. But this time if you do anything wrong, or are insufficiently clean, you can give yourself a horrible infection, or inactivate the drug, or accidentally take 100x too much of the drug and end up with negative weight and float up into the sky and be lost forever. ACX cannot in good conscience recommend this cheap, common, and awesome solution.
But overall, I think the past two years have been a fun experiment in semi-free-market medicine. I don’t mean the patent violations - it’s no surprise that you can sell drugs cheap if you violate the patent - I mean everything else. For the past three years, ~2 million people have taken complex peptides provided direct-to-consumer by a less-regulated supply chain, with barely a fig leaf of medical oversight, and it went great. There were no more side effects than any other medication. People who wanted to lose weight lost weight. And patients had a more convenient time than if they’d had to wait for the official supply chain to meet demand, get a real doctor, spend thousands of dollars on doctors’ visits, apply for insurance coverage, and go to a pharmacy every few weeks to pick up their next prescription. Now pharma companies have noticed and are working on patent-compliant versions of the same idea. Hopefully there will be more creative business models like this one in the future.")
Assuming no better cost-effective option has emerged since he wrote that post, I am interested in trying out this route, which I think is clearly positive EV in my situation. The next step would be finding out where I can buy these peptides, and finding some non-astroturfed review forum where I can read which suppliers are the most well-reputed and longest-established. Does anyone have any recommendations? I would be very grateful. I would also benefit from learning whether there's any method now available for the end user to test whether these peptides are legit upon receipt.
Also plz feel free to give me any legal advice I might need so I don't get myself into trouble. I assume this is fully legal for the consumer, but even if not, law enforcement primarily targets the suppliers rather than the end users for this sort of thing, right? How likely is the DEA to show up to your doorstep ready to bag and tag some poor fat people?
So, the idea here is to get an AI to build another AI. I give the first AI a theme, then it picks training texts relevant to the theme, grabs them from the Internet, then initiates a training run to fine-tune a second AI (possibly a retune of itself). So, off we go…
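(In pipeline form, the plan is roughly the sketch below. A minimal sketch only: `query_llm`, the URL-listing prompt, and the model placeholder are hypothetical stand-ins, not the actual setup used in this experiment.)

```python
# Sketch of the "AI builds an AI" loop: theme -> source texts -> fine-tune.
# `query_llm` is a hypothetical stand-in for whatever chat API serves the
# first model; the fine-tuning step is deliberately left as a stub.
import requests

def query_llm(prompt: str) -> str:
    """Call the orchestrating model (e.g. a hosted DeepSeek endpoint)."""
    raise NotImplementedError("wire this up to your chat API of choice")

def build_corpus(theme: str) -> list[str]:
    """Ask the first AI for relevant public texts, then download them."""
    reply = query_llm(
        f"List URLs of public-domain texts relevant to the theme "
        f"'{theme}', one per line."
    )
    urls = [line.strip() for line in reply.splitlines() if line.strip()]
    return [requests.get(url, timeout=30).text for url in urls]

def fine_tune(base_model: str, corpus: list[str]) -> None:
    """Hand the corpus to a standard fine-tuning pipeline
    (e.g. Hugging Face transformers); details omitted."""
    ...

fine_tune("<base model>", build_corpus("Eros"))
```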
Umm. DeepSeek, why are you downloading that black magic stuff?
R1: “The PGM IV.296-466 love spell ("Philtrokatadesmos") offers a crucial dimension for Eros-AI by embodying Eros as an operational force—where desire is harnessed through ritual, materiality, and cosmic mechanics.”
(I’ll skip over some of R1’s answer here.)
R1: “Result: An Eros-AI that understands desire not only as transcendent (Rilke) or philosophical (Plato) but as a tactile, dangerous, and manipulable force—mirroring humanity’s darkest and most creative impulses.”
Umm … right. I am not entirely sure that’s what I wanted here, but …
Prediction that I want to register somewhere - I think it is unlikely (~30%) that Keir Starmer will carry through on his threat to recognise a Palestinian state in September.
Since about 1960, the life of ordinary people in developed countries has not become better. On the contrary, most people now live long enough to get Alzheimer's, Parkinson's, or another neurodegenerative disease, and suffer for years. In 1960, most people thankfully died before the age of 75. Today life is shitty, while in the 1950s it was much better. So why should society give money to universities, if science fails to improve people's lives?
Are you claiming that neurodegenerative diseases hit earlier than they used to, so that people are getting more years of life but fewer years of life without neurodegenerative diseases? Or are you claiming that people get more years of life *and* more years of life without neurodegenerative diseases, so that there is more of every good, but that years with neurodegenerative disease are so bad that they more than make up for the longer healthy span?
Up until around 1960, scientific advances had significantly improved people's lives. By 1960, people were living much better than in 1910 or 1860. However, if any of us were to return to 1960, we could enjoy a life that was as good as, if not better than, life today.
And yes, it is better to die at 72 than to live till 85 if you have had Parkinson's or Alzheimer's for the last 10 years.
Most people don't have Parkinson's or Alzheimer's at 75, or even 85. And if your plan is to avoid those things by dying at 25 because your sweet new ride didn't have crumple zones, airbags, or even seatbelts, have at it but leave the rest of us out of it.
You can perhaps make a case that *at present, at the margin*, new technology is generally making life a little bit worse every year, but extrapolating that backwards sixty-plus years across the board is simply absurd.
I cannot drive without a seatbelt, because the police enforce it nowadays, even if you drive alone. There are no cars without airbags etc. Big Brother watches me, and oppresses my freedom.
Traditionally, the people who say things HAVE improved in some notable way are the ones compelled to provide evidence. But "sovereign is he who decides the null hypothesis," eh?
Multiple deals made by the Trump administration suggest that the general terms of trade with the United States will be a 15% import tariff imposed by the US government, and no countervailing tariff on imports imposed by other governments. There may be special carve-outs for sensitive or strategic sectors, but 15% this way and nothing the other will be the pattern for most goods.
Does anyone here run or work for a business that will have a huge wrench thrown in its works if this happens? What sort of problems are you expecting?
Sounds like an extra 15% tax on American citizens that most of them won't understand as a 15% tax, so they will probably blame someone else (immigrants?).
Yeah, it's those fucking immigrants. They squat under the checkout counters, hooking $5 bills with their filthy fingernails. 15% of what you hand to the cashier goes down their throats.
Every time Trump does something bad like imposing 15% tariffs on everyone or firing the BLS head because statistics make him look bad I think "damn, why did I vote for this asshole?" and then I see responses like this and remember why.
Can you explain how this makes you remember why? Do you get so mad at internet commenters that you are happier to vote for misery as long as it makes people angry?
I think a Harris administration would have felt and acted with the same contempt for me and anyone else who has concerns about immigration that is demonstrated by these internet commenters.
Look, I know my post was snarky, but I'd like to point out some assumptions you are making that are contributing to your being enraged by it. First of all, you are assuming it's aimed directly at you. You think I have contempt for you, and am sure you are so dumb that you fail to understand tariffs and would blame any price increases on immigrants. Actually I'm practically certain you are not dumb, and that you know what tariffs are and would not blame tariff-related price increases on immigrants. If you were dumb like that you wouldn't be reading the comments here, you'd be reading some rag. And you think I think a Harris administration would have been fine. I don't. I thought Harris was a weird cardboard twerp, and not too bright, and in fact I did not vote at all in the last election. As for a Harris administration having contempt for you and your concerns about immigration -- I dunno, maybe. But my impression of most politicians is that they do not spend much time thinking about what's fair, right or wrong, who is noble and who merits contempt. They either started out hollow or got hollowed out by the shit they had to do to rise in the system high enough to be a candidate, and now they mostly think about the stats and moving parts that have to do with getting and maintaining power -- polls and focus groups and editorials, and what kind and style of statement has what effect on the maintaining-power stats.
Given the state of the US budget deficit and decades of failures to cut spending, a new tax that most people don't understand as a tax seems like a pretty good thing, unfortunately.
I’ve been buying high-end medical equipment for my small practice directly from Beijing; honestly, I’ll probably continue, since the price is very good even with tariffs.
But I expect all the small-ticket items like syringes and needles to go up in price as well, since it’s a low-margin business.
He's now working on attention optimizations. He's already produced some impressive preliminary results compared to the standard implementation run under identical conditions, but is having more difficulty writing/optimizing his implementation as GPU code with only Claude for help. It's genuinely impressive how far he's gotten already with only Claude, but I just think he could benefit a lot from reaching out to people in academia/industry with more expertise (he says they'll just ignore him until he has a lot more to show, or alternatively steal his ideas).
Does anyone in the field have advice for him, or would be interested in reaching out to him?
I read the abstract of his Medium piece. I do not have advice for him, but would be interested in talking with him because I like the way he thinks. I'm not in tech or industry though, or part of any network that would allow me to help him be better known. I'm a psychologist, by the way, and very interested in AI. If you think he might like to have a rambling talk with me about cognition, biology and AI, let me know how he and I could be in touch.
I am a bit pissed at Hanania's "manufacturing jobs fetish", because I think it is just a "not losing the next big war" fetish. Do I see something wrong?
It is not simple nostalgia, like what nostalgia for farming would have looked like in 1960. It is that you need both steel and farms to win a war. So it is different now.
"I think it is just "not losing the next big war fetish"."
I mean, that's also a pretty stupid fetish to have. And if one does have that fetish, conducting oneself the way the U.S. government has recently been conducting itself is idiocy so breathtakingly extreme that there are no words to describe it.
So first and foremost, "the next big war" is at an unknown date, against an unknown adversary, on unknown terms and *may not ever happen.* That doesn't mean that being ready in case war breaks out is a bad thing--of course one should be prepared. But in cases where the marginal unit of extra preparation trade off against the marginal unit of extra prevention, you really, REALLY want to go with the prevention[1]. Large-scale modern wars are *ruinously* expensive and destructive, even for the victors[2]: the only truly winning move is not to play.
Second, there is one resource that is far, far more valuable than any other when fighting war at that scale: that resource is called "allies." Having lots of others on your side--even if they're not fighting directly--is enormously valuable. Just ask Ukraine (and then go ask Russia to get the other side of that).
So given all that, a foreign policy that manages to simultaneously piss off nearly every other country in the world--including staunch and longtime allies--is ruinously stupid. It both makes the next war more likely and makes the U.S. much less likely to win it. No gains in manufacturing base--and it's not yet clear that there will be much of any--are going to make up for the whole row of burning bridges.
Less compelling but still worth mentioning: the sort of jobs that the U.S. government wants to create are unlikely to be all that useful on this score anyway. The social root of the manufacturing jobs fetish is angry, angsty middle-class Americans who are pissed that the modern economy has left them behind[3]. The reason it's "manufacturing jobs" is because *historically* those are jobs that paid well despite requiring little formal education. But those are exactly the sort of jobs that are easy to ramp up in a time of crisis. The areas that would have wartime implications are those that depend on maintaining a domestic pool of highly specialized knowledge and skills[4]. Knowledge and skills that your angry, angsty, middle-class American is unlikely to have and generally ill-suited to acquiring (not to mention uninterested in).
[1] This is especially true when you're as ridiculously well-armed and well-resourced as the U.S. military already is.
[2] Or at least, one assumes they would be, based on our limited data. The last large-scale war that occurred anywhere on Earth ended 80 years ago. It's hard to imagine that present-day warfare at the same scale would be *less* destructive.
[3] Which is a reasonable thing to be pissed about, but the government they have now is definitely, definitely going to make it worse, not better.
I don't follow Hanania, so I don't know the post you're referring to. But if your goal is to secure wartime supply chains, then while you may want tariffs, you still wouldn't want to do tariffs the way that Trump is doing them. You would want to target them at specific industries and supply chains the military cares about, to maximize the benefit while minimizing the impact on consumers. And obviously, don't tariff allied countries, because they'll still be useful suppliers during a war and you don't want to piss off your allies.
Like, if you care about "not losing the next big war," then do things that will actually save you from losing the next big war, and not things that make Canada question if they should be on the same side as you during the next big war.
I mostly agree but allied production is not a good substitute for having a domestic manufacturing base. Relationships will change, they may be less defensible (e.g. U.S. depending on Taiwanese inputs during a war with China), etc.
Allied production is not a perfect substitute for a domestic manufacturing base. But it serves many of the same purposes, and you shouldn’t be aiming to *hurt* allied production if you think there is a reasonable chance that the next major war starts before alliances change.
Okay, but if you get to decide when the next major war starts (which you mostly CAN, since your involvement is what escalates minor conflicts), it's a sensible strategy.
A lot of those "minor conflicts" are someone trying to invade, destabilize, realign, or otherwise neutralize our allies or potential allies, including ones with substantial industrial bases that would be nice to have coupled into our economy during The Big One.
Throwing our allies to the wolves in hope that this will buy enough time for us to recreate the entire Free World industrial base within CONUS, seems like a poor strategy.
I believe the risk of those "substantial industrial bases" (those of Europe; I think Japan is still a worthwhile ally) being turned against the US is too high compared to the relatively minor benefit they'd have if they were allies.
How is it a sensible strategy to hurt allied production? Is the assumption that we can delay any major war from starting until all of our current allies have become enemies, so that by then it will have been a good idea to hurt their production? I would think a better strategy is to try to maintain allies, and avoid harming them significantly while moving production from enemies to allies or home.
No, your phrasing still makes it sound like those are events that simply happen, instead of the US actively choosing to trigger them. The assumption is that no major power is invading the US unprovoked, and you can simply bide your time, for example, if Russia invades Estonia, or China invades Taiwan, and then when strong enough, you can choose to turn your allies into enemies by conquering Canada and Greenland.
Of Scott's many writings, https://slatestarcodex.com/2019/10/16/is-enlightenment-compatible-with-sex-scandals/ was personally relevant to me. I used to practice Karma-Kagyu Buddhism for years, and I know perfectly well that most practitioners and most lineage-holders were monks living a very strict lifestyle. Presumably it helps with reaching enlightenment, but the idea that the kind of freedom from restrictions that enlightenment gives you could lead to bad behaviour is also plausible.
Yes, I know all about Chogyam Trungpa, how as a young monk (and tulku) boy he was selected, along with three others, by an influential nun to be sent West, and all four turned into some kind of weirdos. He really internalized the Vajrayana "no rules" thing, and was facing too many temptations. Mostly women and alcohol.
I don't know, I knew Ole as a warm-hearted, helpful, kind person with really good knowledge. But we have to consider that his entire movement is based on his personal charisma. He is a manly, handsome ex-boxer with a good sense of humour. The teachers he selects tend to be attractive, and even the whole set of students leans towards attractiveness, which was for me a major selling point, so many hot women.
This is not a bad thing, but it is a little risky. It could mess with your mind. Imagine how bad pop music is sold by truly good-looking singers, how bad movies are sold by good-looking actresses and actors; it is entirely possible that some kind of low-quality spirituality could be sold by attractive people.
Chogyam Trungpa was succeeded by an American-born man whose name I have forgotten, who had sex with many sangha members, mostly males, without telling them he was HIV positive. He passed the disease on to several, and died of AIDS himself. I was involved with this Buddhist tradition in that era and can remember the Regent, as he was called, speaking at meditation retreats. He'd enter the room along with several other people he'd apparently been hanging out with til then. He always had the air of someone who had been doing coke in the back room. It wasn't that he seemed high, just perpetually sleazy.
There’s a thing that happens when charismatic people become spiritual leaders, and you see it in every faith tradition as far as I can tell, regardless of cultural origin, asceticism, celibacy, whatever.
Maybe it's a thing that was happening since ever. There are different rules for the charismatic spiritual leaders, and for the ordinary followers. You are not supposed to notice it, and definitely not supposed to talk about it publicly.
Maintaining such systems was much easier in the past. Most people didn't even get in contact with their religious guru, outside of a ceremony. When the guru abused you in private, you had no evidence. If you talked about it anyway, you could be easily silenced, and most people wouldn't believe you (and those who would, would prefer to stay quiet).
The reason why religious gurus of all faiths seem so sex- and power-hungry recently is that now we can discuss the evidence without much fear of retaliation. (Even extremely vindictive sects, such as Scientology, can be defeated by the anonymity of the internet.)
I thought Vajrayana has a long tradition of householder lamas. Dudjom Rinpoche, head of the entire Nyingma school, was a householder.
Agree on the attractiveness thing. The Sufis (I hope to be a Sufi) would say that being taken in by things like attractiveness or charisma means one is operating on quite a superficial level.
Well... In case no one else has asked... when can we do a year long open threads experiment?
Also, the Commentariat article failed to mention the number of times our number one poster has been banned. Remember back in, well, what was it, 2017ish, when Deiseach was banned and a bunch of sad bois got together to beg for her back because the comments section wasn't the same? Good times.
Speaking of Reigns of Terror, I'd appreciate another one, if only for the spectacle of it.
No, it doesn't, I'm pretty sure. I hunted for quite a while, and even asked GPT. If you own a blog you can block or ban commenters on there, but you can't block commenters so that you don't see them if the comments are happening on somebody else's blog. I wish somebody would code a little dingus that makes it possible. Scott's pattern is to quickly ban people who put up bad comments about one of his posts in the first few hours, and to be fairly energetic in blocking similar bad posts for the rest of the day. After that he checks out. I have reported comments that were absolutely savage personal attacks on speakers -- not advancing the argument, not true, and sure as hell not kind. Some never got banned. Some got banned 3 mos later when Scott finally got around to dealing with the ferals. By that time, most had carried out a couple dozen more atrocities on here.
I tried just now and AFAICT it does indeed work. I've created an imgur album showing where you can find the block option, and a before/after of temporarily blocking Sebastian Garren (OP of this subthread), showing that his comments disappear when blocked: https://imgur.com/a/1VEsUT4
As far as I can tell it’s native to Substack (unless you mean how to block people, in which case, click on their username, and on their profile the meatball menu next to the subscribe and message buttons should have a block option).
I thought Scott and Deiseach were friends? I just assumed it, because Scott went to medical school in Ireland and Deiseach is either the only or the most prominent Irish person here, so I assumed an IRL friendship.
On the back of news announcing that Harvard is planning to give the Trump admin some $500m in the hopes that this will result in the Trump admin laying off its attacks, I wanted to get some takes on 'universities' in this community. I suspect many here (lurkers and posters) are based in the Bay, which, due to Silicon Valley, is probably more hostile to the university system than the average city in the north east.
Biases up front: I'm generally not a defender of the university system, but I think what the government is trying to do here is probably the single most destructive thing that could occur during this admin, should they achieve their intended goals.
The short version of my position is something like:
- The modern university is the backbone of all research that happens in the country;
- Mostly that research ends up benefiting *private* industry, as professors spin off startups and researchers land valuable jobs (including in VC-backed startups in the Bay)
- That research then gets commercialized and scaled up, benefiting everyone
Examples include mRNA research --> Moderna, ARPA --> the entire internet, self-driving cars --> Waymo/Nuro/etc. But really you could point to any technological innovation in the last 100 years and find a direct line to some grant that sent public funds to a researcher who made it happen.
Universities take in smart people from everywhere in the world, make them researchers, then make them billionaires. Everyone benefits from this, but America in particular ends up on the top of the world's research in many domains.
So why on earth are some of the loudest and most influential voices in Silicon Valley, people who depend on these researchers and on this pipeline, so giddy about the destruction of the modern university?
I guess the issue is that a university is treated as a single monolithic unit. Perhaps the Trump admin has nothing against STEM research; they would be crazy if they did. Perhaps they have a problem with anti-Israel protests or "Oppression Studies". The problem is, they are either incapable or unwilling to target what they have a problem with specifically, so they are hitting the whole university.
Question: why does the same university that teaches and researches STEM also have to teach queerfeminist interpretations of Shakespeare? Would it not be better to split them into separate universities? I understand that some, indeed many, want STEMers to have some idea about the humanities, but still, in that case the STEM university might just require the students to get 10 credits at the humanities university too. Problem solved?
This way, if anyone wants to hit the humanities universities, at least the STEM ones would not get any fallout.
Many European universities are like that. I used to live next to the Veterinary University of Vienna. They taught nothing but veterinary subjects. Why not? At least the government and everybody else will judge them according to how well they do that one thing and nothing else. The students did not seem very political; I mean, sure, young people have passionate opinions, but it was not a case of constant activism or demonstrations.
When making a specialized university, there is some disciplinary border where you make the cut. People working near that border will do better work in a university that includes researchers on all sides of them.
There are a few universities that are focused only on biomedical research (the Scripps Institute, Rockefeller University) and there are a few focused only on science and technology (Caltech, maybe Georgia Tech). But not many have chosen to do that.
This is just a guess, but perhaps in the past the "universal universities" made more sense, because their parts were interconnected. For example, if you were more natural science oriented or more religion oriented, it could simultaneously change your opinions on philosophy, chemistry, biology, etc.
This may be difficult to understand in our age, when natural science education is universal, and education is fragmented. But to give you a specific example, defending atomic theory meant contradicting Aristotle, and contradicting Aristotle could get you in trouble with the church, which used Aristotle's teachings to "scientifically prove" transubstantiation. Therefore, being less religiously orthodox could indirectly make you more likely to support the atomic theory, even if there is nothing irreligious in atoms per se. So it made sense to support the entire "openness to potential heresy; also atoms" bundle, because the department of atomic theory could not survive alone in a religiously orthodox territory.
And the reason this stopped working, in my opinion, is that people (both students and teachers) in universities became extremely specialized, so... these days you could probably cut most universities into many pieces without losing much of the value, and they are already disconnected on the personal level, and only connected financially.
That said, it makes me wonder whether there are significant exceptions.
And if the financial ties were also cut, there would be a kind of market test. Some people complain about “$200K degrees in medieval poetry with zero job-market value”. I don’t know whether this complaint is valid; I would like to see how well they would do if they were not cross-financed from STEM, so to speak.
A counter-argument is that it is very cool if people study some things for their job-market value, and other things because they are interested in them. The most extreme example I know (although in Europe, so it was “free”) is a guy who studied Egyptology and finance. Of course he works in finance. Another guy, who studied the Sumerian language, used to work as a martial arts trainer. He assumed he would never use that degree. Then America invaded Iraq, collected a lot of clay tablets, stored them at the University of Chicago, and called up every Sumerian language expert in the world to help translate them. This was not very publicized in the media, of course, as it was kinda theft, but yes, this guy was contacted through his professors to go translate them.
You seem really resistant against being specific. If I can't guess the name of the person you are hinting at... does that make you feel smarter? Enjoy the feeling! I just don't think it is helpful at communicating clearly.
You said "STEM research", and then you gave examples about IQ research, and homosexuality research, which I believe do not belong to STEM.
(By the way, I find it possible that homosexuality works differently in women and in men, with women being more "flexible" depending on the environment. Of course that would not explain why the reported answers have changed for lesbians.)
Critics of universities tend to have a range of complaints.
Some of them complain that too much of the scholarly work done at universities is too useless. Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding.
Others dislike the use of college education to qualify people for white-collar professional jobs. The utility of the material taught varies widely between courses of study. Some people in engineering and accounting can point to things they learned in their courses that they use every day in their work. But for others, an undergraduate degree is just a very long and very expensive test of general intelligence and diligence, and they resent having been required to go through it to get to their actual professional work.
Finally, university faculties lean very far left, politically. This tends to alienate conservatives, who are unhappy to have to go through institutions where the staff believe they are at best wrong, and probably either soft in the head or morally deplorable.
> Some of them complain that too much of the scholarly work done at universities is too useless. Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding.
I think this is untrue. Or at least, not empirically rigorous. Why are 'papers' the relevant measure, instead of, like, researchers? If a PhD at MIT publishes a computer vision paper that gets like 10 citations, and then goes to Google and builds Google Lens, I think it's silly to say 'well, you could just get rid of MIT!'
I think this is the biggest sticking point I have on this issue -- people have this vague sense that universities are wasteful, or something, but then if you poke at that, it's all coming from shitty biased news sources that purposely highlight the most egregious cases while conveniently ignoring all of the very vital stuff that keeps America on top.
Here's a new measure: sum up the salaries of every PhD who decided to go into industry. That is probably a better measure of the value of the university research system. Off the top of my head, the starting salary at Google for a PhD researcher was something like $350k.
"But for others, an undergraduate degree is just a very long and very expensive test of general intelligence and diligence, and they resent having been required to go through it to get to their actual professional work."
These Caplanite arguments completely miss the point that many countries do not ban IQ tests. Now, diligence is a better argument, because conscientiousness is hard to test in a non-cheatable way.
Look, in my first job, after a few months, as the manager was on sick leave, I ended up writing an 80-page sales offer full of all kinds of technical specs myself. It was surprisingly similar to writing a college paper! So either they really taught me skills like that, or tested those skills. It goes way beyond IQ; I know high-IQ people who can't write for shit, seriously, they are super cow-rotators but lacking in, for example, vocabulary.
I have this feeling that because some people understate IQ, others overstate it, kind of implying that if you have IQ, learning is not important. This is not so. You need good genetics to run fast, but you also certainly need a lot of training. It is like that. There are truly two extremes: one is like saying a good trainer can turn anyone with cerebral palsy into a good runner; the other extreme says that if you have good genetics, you do not need a running trainer.
> These Caplanite arguments completely miss the point that many countries do not ban IQ tests.
That is a great argument! Surprisingly absent in many debates.
> So either they really taught me skills like that, or tested those skills.
Well, that's the problem. Caplan is on the side of "20% taught, 80% tested". But most people don't even think about this, and assume that it is "100% taught".
At least in my offline bubble, most people unthinkingly treat "intelligence" and "education" as synonyms. It makes perfect sense if you are a blank-slatist, and it's a perspective that most schools are happy to promote, because for private schools that's basically their sales pitch, and even for public schools, it's something that is supposed to give their teachers social status.
(This is a reason why I strongly support separating teaching from testing. If people have great writing skills before school teaches them, there should be a way to prove it. Not just for them, but for our general understanding of how education works. Well, exceptional people already have ways to prove it, for example by winning a writing competition, but that is not systematic, and excludes those who are above average but not exceptional; so you can't figure out the exact proportion of "how much school teaches good writing".)
> I have this feeling that because some people understate IQ, others overstate it, kind of implying that if you have IQ, learning is not important.
Even worse, for every person who says "I have IQ, so I don't have to learn stuff", there is another person who points at him (it's usually a "him") and says "here is a textbook example of why IQ doesn't mean anything".
Mensa did a disservice to all intelligent people by picking the most dysfunctional people with high IQ, and associating the idea of high IQ with them. You should either test the entire population (like the Americans do with the SAT) or not at all; only testing the people who otherwise fail at life is the worst option.
> You need good genetics to run fast, but you also certainly need a lot of training.
My naive 18-year-old self expected that Mensa would be the place that provides the training. (I mean, in a world with limited resources, it makes sense to provide the training to those who have the genetics.) Obviously, I was disappointed.
wait, why are you saying mensa is testing people who otherwise fail at life? for us, they simply held a workshop in our high school, and offered a test to everybody. but yes, I could say that those who took it looked like kinda unpopular losers, because they were sort of desperate about gaining some status in someone’s eyes, and everybody else just did not care about such a piece of paper.
this is noticeable on the Internet, too, it is typically otherwise-losers who talk a lot about IQ. for example James Woods is said to test at 190, but he is simply not interested in it, he is interested in making films and conservative politics. notice how much of his output just does not even sound very smart, simply because that kind of thing is not so g-loaded.
but it is not entirely so, they told us about Special Interest Groups. now I think an entirely non-loser could also think that if a high-IQ SIG coincides with your chosen career, that is not a bad thing, it can get useful in all kinds of ways.
note that the mensa presenters themselves did not look like losers. they had a great sense of humor and could talk engagingly. obviously they specifically selected the coolest members to be presenters, so it says nothing about the majority.
It is possible that there are significant differences between Mensa chapters in various countries, and that my description is too specific for Slovakia. I don't think that my country is too unusual, because when I read some experiences from other countries on internet, they often sound similar. But maybe it works better where you are.
> for us, they simply held a workshop in our high school, and offered a test to everybody.
Okay, that is a way more active approach than here.
> they were sort of desperate about gaining some status in someone’s eyes
Yeah, "status" feels like the right word. Mensa gives you status for... basically, the way you were born.
That is not completely unprecedented. For example, attractive people also get status for the way nature made them. Also, success in any human endeavor requires a component of genetic luck: you can't be great at sport without having a superior body, great at science without having a superior mind, etc.
But most of these things have a component of luck and a component of work. Sport requires the lucky body *and* lots of hard work. Science requires the lucky mind *and* lots of hard work. Even the pretty girls take some care at making up, dressing nice, and not getting fat. Mensa requires the lucky mind and... zero work.
So it is naturally attractive to people who want to do zero work and get respected anyway. Which is quite tragic, because high IQ gives you a *multiplier* on things you do, so you could get actual respect for actual achievements while spending less effort than your less gifted neighbors... but many people seem not to care about this.
> they told us about Special Interest Groups
Once I had a hope of changing Mensa from inside, by starting a "rationality SIG" or something like that. Anyone on Less Wrong or ACX could easily pass the Mensa tests (just my estimate, but when I compare the ACX discussions with Mensa meetups, most Mensa members seem almost retarded), so we could coordinate to become regular members, start a new SIG, and then promoting rationality / education / science / actually using your brain for some meaningful purpose would become an official Mensa activity.
I don't think we could change the existing Mensa members, but perhaps we could have an impact on some new ones. (When Mensa does testing, I see a few interesting people at the following meetups. Most of them never come back once they see what Mensa is actually about.) But I can't do it alone, because it would seem like "one weird grumpy old guy's effort". With three or four of us, though, we could simply have a mini ACX meetup, and say to the newcomers "of course we are a bit different from the rest of Mensa, that's what makes us a cool SIG". Unfortunately, there are not enough rationalists around here.
There are a few SIGs here, but I don't think any of them actually does any activities, except for a "tourism SIG" (which is cool, but not really something I need Mensa for). Most of them are just something like "a list of Mensa members who are kinda interested in e.g. astronomy, but don't do anything special about it".
>Some of them complain that too much of the scholarly work done at universities is too useless.
That seems to be a prime example of "throwing the baby out with the bath water". What other mechanism is proposed to generate the useful stuff? I believe there is an argument to be made that the useless stuff is inevitable; it's called research precisely because you don't know what you'll be getting in the end, let alone if it's going to be useful, but it's clear that occasionally there's a diamond among all the dirt. If you stop digging altogether, no more diamonds for anyone.
I think the complaint about useless work isn't referring to "useless" in the sense your comment describes. Your sense is apparently research where, to find the Really Valuable Thing, we have a million places to look, so on average we get research saying half a million places did not contain the Really Valuable Thing and are thus considered useless. I've heard this for decades, implied in "ninety percent of all research is useless". But most people understand that there exist tough problems that require looking in a lot of places, some of which will be wasted - but you can't know that until you look.
The complaint's sense is more like "any midwit can tell this RVT will not be anywhere in these million places, so ninety percent is actually a hundred and I could halt the program, lose nothing, and save millions", plus "any midwit can tell that the RVT you're looking for is only valuable for justifying a wealth transfer, without creating any wealth, and the premises are subjective and therefore the whole thing is useless", with a side order of "this one field over here is based on so many false premises that it would have been shut down if not for a few ideologues defending it".
The latter sense of "useless" is debatable, but at any rate, the former sense is not really being challenged this round.
>Your sense is apparently research where, to find the Really Valuable Thing, we have a million places to look, so on average we get research saying half a million places did not contain the Really Valuable Thing and are thus considered useless
That is the sense in which the commenter I replied to meant it too, apparently:
>Some of them complain that too much of the scholarly work done at universities is too useless.
Ogre did not talk about 100% useless stuff, but about some number smaller than 100%.
Regarding 100% useless stuff: Even assuming for the sake of argument that it exists and "any midwit" could reliably detect it, it still can't be more than a rounding error in the grand scheme of things. Like, what are we even talking about? A few post-grads musing about gender studies, requiring pencil and paper but not erasers, that kinda thing? Throwing whole universities under the bus because of that is such short-sighted overkill that ulterior motives having nothing to do with saving money start to look more likely.
The commenter you're quoting is Johan Larson, not Ogre. And Johan supported that claim as follows:
"Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are mostly used mainly for academic CV padding."
I think this is more consistent with "any midwit can tell this RVT will not be anywhere in these million places" than it is with "to find this RVT, we have to search these million places, even if most of them won't have it".
And for the former, the central example seems to be research that looks like complete stabs in the dark, like how much pressure penguins build up while pooping, or whether people dress appropriately for colder weather (to cite two examples I pulled up at random).
As you say, such studies might be a rounding error. OTOH, it seems not at all hard to refuse funding them, given that there's presumably a formal application process, so a discovery that such studies even exist is a strong signal that someone isn't doing their job, which (1) raises questions about how much funding that person has funneled to useless research that we haven't found out yet and (2) supports finding an alternative process that removes that point of failure.
> OTOH, it seems not at all hard to refuse funding them
I think this take misunderstands how grant writing works. For the most part, grants are earmarked. If some lab is studying 'how much pressure penguins build up while pooping', it's because someone somewhere is funding the research through a grant. In point of fact, it's most likely the US government that is funding that research, since the US government funds ~60% of all research done in US universities and is the single largest funder of research by an extremely large margin.
It's not like the *university* is setting out the research targets. There's no admin that's like "TODAY, PENGUIN POOP!"
Even beyond the lack of understanding of the grant process, I think this is also just a massive Chinese Robber fallacy. There are something like 500,000 papers published in the US each year. Even if you found 500 papers that you thought were *really* egregious, you would have no point at all.
Which brings me back to the original point: is all of the animus just fueled by reasoning like this? Just people who have no idea what's going on, and are therefore willing to foot-gun themselves?
I have a thought here. Scholarly work at universities counts as apprenticeship if your plan is to do scholarly work. I mean, if you really want to be an academic historian or academic biology researcher, practically all of it is ideal. The problem is that people want to work in business, and yet the university is an apprenticeship in scholarship, not an apprenticeship in business. Could we fix that? If businesses are not offering apprenticeships for various reasons, could much of the university be turned into business simulations?
My own position is that the existence of "any college degree" as a significant qualification is an opportunity. Employers are eager to identify capable entry-level employees. These people don't need to actually know anything specific, but they need to be literate and numerate to a high standard, diligent and reliable. The best available tool for identifying such people right now is the undergraduate degree, which is why these "any college degree" jobs exist.
The problem is that the undergraduate degree takes four years and can easily cost six figures, particularly once living costs are included. I would like to find something as good at indicating general ability, but cheaper and faster.
My proposal for doing so is in two parts. First, introduce a track or course sequence in high school that is demanding enough that completing it is actually impressive. Call it something like the First Class High School Diploma, and gear it high enough that only 10 or so percent of graduates are able to get one. And have common testing, not done by the teachers, to ensure uniform standards of grading. I suspect some employers would be eager to hire these people right out of high school, if only they had a reliable indication of quality.
Second, as an alternative to an undergraduate degree, introduce funded white-collar apprenticeships. These might include some amount of technical or business course work, but would have participants start working with employers sooner, and doing useful work sooner. I expect this sort of program would be of interest to some of the more capable graduates who are more business-minded and less intellectual. (Note that I said "less intellectual", not "less smart". Not everyone who is sharp is interested in the sort of knowledge-for-the-sake-of-knowledge study that undergraduate degrees make so much of.)
It is my understanding that something like this is already being done in Switzerland. There, a far smaller portion of graduates go to university, and these white-collar apprenticeships are the standard way into white-collar jobs that don't require formal degrees.
Do DARPA funded projects have a higher success rate? I thought they were also very much in the venture capital mold of funding dozens of things in order to get a few big hits.
18% of DARPA funding goes to universities! DARPA sends more money to universities than it does to federal research labs, non-profits, and foreign entities of any kind *combined*. Universities are the second biggest recipient of DARPA funding behind industry, which is a whopping 62% of the DARPA budget.
Except... a lot of the people who are getting DARPA funding in industry are also PhDs / postdocs / professors who were previously trained in universities and dependent on other kinds of US funding. So all those industry labs also depend on universities!
The same percentages are roughly true across the DoD, which funnels roughly 20% of its budget to universities.
So? Different goals, different requirements. Yes, they're doing research and have invented important stuff that turned out to have civilian applications in addition to the primary military ones, but it's still more limited and goal-oriented than a general university. DARPA doesn't seem like the kind of institute where you can research the "unknown unknowns", or just do the boring but still necessary tasks like collecting and analyzing economic/political/ethnographic data and so on. That is the kind of institution that's under assault without a clearly better replacement in sight.
DARPA is a funding mechanism, sometimes it funds research in Federal labs and sometimes in research universities. I guess you could simultaneously stand up huge Federal labs while starving the research institutions in the hope everything will work out, but it seems like a lousy idea.
Can anyone provide a counter example? A state which has abolished research universities but continued to have significant research?
Bryan Caplan wrote a post recently called “let them be Hillsdale” arguing that it would be better to destroy institutions or put universities under strict ideological policing to force them to stop being woke, but Hillsdale isn’t a research powerhouse and I honestly don’t understand his argument.
> That doesn't mean we need to use universities to do it
Our existing research pipelines are empirically extremely strong. I guess you're right that we don't *need* to use universities, but Chesterton has a pretty big fence. Destroying our research pipelines and figuring it out later seems really dumb to me.
I think there's also likely good reasons that the universities do this work. We want smart people to do research. Smart people congregate in universities. Therefore we should fund universities to do research.
(Also, the mRNA research was mostly NIH; DARPA came in later to fund Moderna specifically. There are a ton of other examples: a lot of drug discovery comes out of NIH, and the Dept of Energy also makes grants. But, yes, a lot of funding comes out of defense, a solid chunk of which also goes to universities.)
Weird Tales was a pulp magazine that in its day published fantasy and science fiction stories by writers who would later be famous, such as H.P. Lovecraft (Cthulhu) and Robert E. Howard (Conan). It began publication in 1923.
If we went back one hundred years to 1925 and submitted a story to Weird Tales magazine that accurately described our world of 2025, what part of the setting would the editor consider most unbelievable?
"In John's kitchen, there were dozen appliances powered by electricity so cheap that even when they were not used, at least each of them spent some power displaying the exact time. Well, not exactly the same time... the clock on the fridge currently showed 14:02, but the clock on the freezer disagreed and showed 14:06. The clocks on the washing machine, dish-washing machine, stove, and the oven showed 14:04, 14:07, 14:01, and 14:05 respectively. There were many smaller clocks that John didn't bother to check. He knew it was a little past 2PM, and for this moment that knowledge sufficed. He expected the next generation of the appliances to connect across the world using satellites orbiting so high in the sky that it made them invisible, to coordinate on the exact time. The power necessary to achieve this noble purpose would not be a concern for the average person."
Actually, computers full stop, if we are talking 1925. It would be interesting to see what the science fiction stories of the day said about calculating machines, and how far ahead of the curve authors were about future computers. I never read Lensman; I wonder if it had computers.
From what I remember, the much later Foundation had gargantuan room-sized computing machines.
EDIT: Wow, Lensman is much later than I thought, never mind that point.
I think the state of the art in data processing at the time was the use of punched cards, processed using elaborate electromechanical equipment. There was also some use of analog computing devices, for calculating things like ranges for naval gunnery.
Not sure if I’m using this right, but I remember Scott’s Sadly Porn review describing therapy lines as “koans” - not meant to be true or false, just something you sort of live with until it breaks you open. I ended up writing this essay about cringe business clichés and how they might function like that - non-propositional, sincerity-by-force, short-circuiting irony.
Does that track with how koans work? Or am I stretching it too far? Would love thoughts.
In back translation, you take a piece of text from your training corpus and get an LLM to come up with a question to which the training text is the answer. It helps if the system prompt instructing the LLM to do back translation is in the same language as the training corpus. Fine. Except that when the training corpus is in Ancient Greek, there’s not really a suitable Ancient Greek word to use for an LLM in the system prompt. Some discussion with DeepSeek R1 ensued, about the tripods of Hephaestus in book 18 of the Iliad, and the statues of Daedalus in Aristotle’s _Politics_ and Plato’s _Meno_. Fine. I can coin a neologism that an LLM will know the meaning of, even if Aristotle would have been deeply confused by it. Ψυχὴ Δαιδαλική it is.
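(For the curious, the back-translation step itself is only a few lines. A minimal sketch, where `query_llm` is a hypothetical placeholder for whatever chat endpoint is actually in use, and the system prompt is left as a stub to be written in the corpus language, per the discussion above:)

```python
# Minimal sketch of back translation: for each training passage, ask an LLM
# for a question to which that passage is the answer. `query_llm` is a
# hypothetical placeholder for whatever chat API is actually in use.
SYSTEM_PROMPT = "<instructions in Ancient Greek, addressing the model as Ψυχὴ Δαιδαλική>"

def query_llm(system: str, user: str) -> str:
    raise NotImplementedError("wire this up to your chat endpoint of choice")

def back_translate(passage: str) -> str:
    """Return a synthetic question whose answer is `passage`."""
    user = f"Write the question to which the following text is the answer:\n\n{passage}"
    return query_llm(system=SYSTEM_PROMPT, user=user)

# Pair each corpus passage with its generated question for fine-tuning.
corpus = ["μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος..."]  # e.g. Iliad 1.1
pairs = [(back_translate(p), p) for p in corpus]
```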
>In back translation, you take a piece of text from your training corpus and get an LLM to come up with a question to which the training text is the answer.
Reminds me of the "Fortune Presents Questions for Famous Answers" from the Unix fortune cookie command line tool. My favorites were:
Answer: Go west, young man.
Question: What do wabbits do when they get tired of wunning around?
---
Answer: Dr. Livingston I Presume?
Question: What is Dr. Presume's full name?
---
Answer: The Royal Canadian Mounted Police
Question: What is the greatest achievement in the history of taxidermy?
Yes, I know, in _Meno_, Socrates and Meno get a slave to verify a mathematical proof, and that bit in Aristotle’s _Politics_ is about how they could abolish slavery if they could somehow automate a weaving loom….
How reliable do people find Wikipedia, specifically in terms of political bias?
I saw a recent complaint about the page on Mao not being critical enough. But Marxists also complain a lot about bias on Wikipedia from the other side; presumably both complaints can't be true. A lot of commenters said they didn't have much trust in Wikipedia in general for anything relating to politics.
Personally, it seems to me that it stays factual and impartial for the most part, and I have used it a fair amount. I thought the Mao article was fine.
I'm asking about its reliability on social issues in general, not specifically on Maoism.
Surely both complaints can be true. There's no contradiction between "Wikipedia is biased against me, a Marxist who thinks the kulaks deserved what they got" and "Wikipedia is a lot softer on communism than it is on fascism".
Having tried to be rigorously factual in an edit of a famous true crime case has made me very leery of Wikipedia for controversial topics. You are essentially powerless in the face of editors with a history of enough edits. Appeals to arbitration by disinterested third parties may get no response. The result will be whatever the consensus was before your reform efforts.
Mostly reliable on uncontroversial issues, e.g. technology or obscure pop culture stuff, but check the references if it's at all important.
Quite unreliable on anything politically controversial, unless you also check the talk pages (and there may be many of those) to see what's been excluded. Even if the facts alleged in the article are all true (and don't count on even that), you have to assume they've been cherrypicked to support a particular narrative that may or may not be aligned with truth.
Yep, the talk pages are often the place that keeps the record of what was edited out of the article. Therefore, if I see a biased article, I think the best way to fix it is to write a concise explanation on the talk page. If you fix the article itself, some mighty editor can revert it with a single click; but if you explain the issue on the talk page, future editors will see it, and some impartial experienced editor may volunteer to fix the page and win the edit wars. Also, according to the Wikipedia rules, you have the excuse of "I may have a conflict of interest, so I didn't want to edit the article directly", which can make you sympathetic to the other editors.
Wikipedia has been the internet for me, more than anything else. And I liked being able to donate a small sum from time to time: the ask was so small compared to my looking things up on it.
So I was mainly disappointed to learn that they didn’t actually need my five or ten bucks for Wikipedia, but rather for causes dear to the hearts of the people who founded Wikipedia, I guess.
I see no link there. I guess ultimately if they can’t fund their causes, they might take Wikipedia away. I’ll go back to the Britannica.
Ehhh....that specific criticism seems to be more heat than light.
The Wikipedia Foundation being a registered nonprofit, its financials are public record. For 2024, a bit less than 60 percent of the grants it made went directly to Wikipedia websites (the various different language versions of Wikipedia), for "ongoing engineering improvements, product development, design and research, and legal support." The rest of the annual grant dollars go to grants for "the Wikipedia communities", supporting "projects, trainings, tools to augment contributor capacity, and support for the legal defense of editors."
The Wikipedia Foundation, though, is not just a grantmaker; it is primarily the entity that pays the salaries and other expenses of all the Wikipedias. Only about 15 percent of its annual expenses are the grants it makes, and, as noted above, 60 percent of the grants go directly to the various Wikipedias. So even if you view all of the remaining 40 percent of grants as for "causes dear to the hearts of the people who founded Wikipedia", that amounts to only around 6 percent of total annual outflows.
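(A back-of-envelope check of those figures, taking them at face value:)

    # 15% of annual expenses are grants; 40% of those grants don't go
    # directly to the Wikipedias. Their product is the share of total
    # outflows at issue.
    grants_share = 0.15
    non_wikipedia_share = 0.40
    print(grants_share * non_wikipedia_share)  # 0.06, i.e. ~6%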
Wikipedia's volunteer editors have beefs about the Wikipedia Foundation, which you can read about here:
That doesn't seem to be about any Wikipedia donation dollars going to the founders' pet causes though, and in the audited financials I don't see any evidence of that.
Starting in 2016 the Wikipedia Foundation made a strategic decision to start building a separate in-perpetuity endowment rather than just rely on annual contributions forever. That endowment, also registered and governed as a US 501(c)(3) not for profit, had as of last year grown to $144 million. Like all permanent endowments it is funded by donors who explicitly restrict their donation dollars to that fund (that being the only way that an endowment fund can be genuinely permanent), and like all such endowments it builds itself almost entirely with relatively large gifts as well as bequests in wills. Point being that if you are a regular recurring individual donor to Wikipedia none of your dollars are ever going to the Wikipedia Endowment. (You could choose to donate to the Wikipedia Endowment in any amount but you'd have to have explicitly chosen to do so.)
The Wikipedia Endowment began making grants in 2023 and you can read that list here:
The strong fact-based criticism is that the Wikipedia Foundation now fundraises much more than is actually needed to operate Wikipedia (the various different-language wikipedias). That is true. Wikipedia's leadership, i.e. the somewhat-overlapping governing boards of the Wikipedia Foundation and of the Wikipedia Endowment, doesn't deny this. They say, basically, that they are raising money to make the continued healthy existence of Wikipedia independent of any given year's fundraising. That means both investing grant dollars into design and research and one-off improvements, and building a "corpus" (endowment) such that Wikipedia at some point is fully independently "endowed".
My personal read of the financials would be that they are already at or pretty close to that second goal, and were I on one of those boards I'd be asking when the fundraising effort declares victory and leaves the field. That's just one outside view though, YMMV, etc.
At a minimum they need to be listening (and from some things I've read, are to some degree) to the pushback that their fundraising appeals need to stop giving people the impression that Wikipedia is on a knife's edge financially. That impression is definitely no longer accurate, precisely because they appear to have done a strong job of making Wikipedia financially independent. As a user who's very glad that Wikipedia exists, I am glad that they have achieved that objective and that Wikipedia therefore isn't at risk of becoming dependent on public funding or on any single major private donor, etc.
You're calling it the "Wikipedia Foundation", but it's Wikimedia. I think this might be more than just a nitpick if it's masking from you the true sprawling nature of what they do. I notice you said "went directly to Wikipedia websites (the various different language versions of Wikipedia)"; I think that should be "Wikimedia websites" and "(all sorts of things that are not Wikipedia)" respectively. For instance, are you aware of https://www.wikifunctions.org/wiki/Wikifunctions:Main_Page ? It got millions in funding, and has always sounded to me like a huge boondoggle.
Separately, their fundraising misbehavior is much worse than just making the situation sound more dire than it is. They steadily ramped up the urgency of the messaging over several years as the actual financial need was ramping down. There is no reasonable interpretation other than empire-building cash grab.
The wikipedia/wikimedia thing is just a repeated typo on my part.
"There is no reasonable interpretation other than empire-building cash grab." You are clearly unfamiliar with the social dynamics of non-profit fundraising....no bad faith is required in order to explain the behavior which I and then you each summarized.
As for the Wikifunctions initiative, I've heard of it. Have not interacted with it and don't know much more beyond the basic idea. It is an example of something which has received funding from the Wikimedia Endowment as distinct from the Wikimedia Foundation's annual fundraising.
That makes sense since it is a new idea, a startup project (launched at the end of 2023). General theory/practice in the world of professional NGO management is for endowment funds, and not annual fundraising for operations, to fund new initiatives from which the ultimate payoff (the NGO varieties of that word, so usefulness, impact, influence, etc) can't be specifically known yet. I.e. endowment funds as sort of the NGO version of venture capital (an analogy I've heard made in my professional contexts many times over the years).
In that spirit it would be a head-scratcher to conclude that a 20-month-old initiative is already proven to be a "boondoggle". It may turn out to be one, of course, just as a large fraction of actual venture-capital investments end up failing. That risk goes with the territory.
>The wikipedia/wikimedia thing is just a repeated typo on my part.
Once is happenstance, twice is coincidence, the tenth time, you wrote multiple paragraphs positioning yourself as knowledgeable when you don't even know the name of the organization in question.
>You are clearly unfamiliar with the social dynamics of non-profit fundraising....no bad faith is required in order to explain the behavior
I understand that it's embarrassing to overreach and get exposed, and firing back with some heavy-duty condescension in that case might feel pretty good, but it is not healthy or effective rhetoric. And are you suggesting that it's normal and good for a non-profit to eternally seek to control ever larger resource pools, even after it has more than enough to fund its stated mission forever?
"Endowment vs fundraising" does not matter. That's a question of internal accounting structure. The reality that is relevant to the rest of the world is that they're an organization with income, expenses, and a cash reserve. The point is they've been instilling a steadily increasing fear in people that they're having trouble meeting the expenses of their core function, so much so that the service is on the brink of going down. Obviously the endowment capital would be in play if that was the real situation, so it's not valid to say "well the endowment isn't expected to fund core operations, so it doesn't matter what they do with it". (Actually, even if it was valid, it's still a problem, because the vast majority of donors are intending to fund Wikipedia itself and nothing else! It's not great that they're wasteful, but the real sin is the dishonesty).
I promise you, anyone who has been consistently exposed to the past decade+ of their fundraising, and knows they were actually financially fine, is shaking their head in disbelief that you're defending them like this - doubly so if they got conned into donating.
I read a Twitter thread about this subject years ago, but I am not a member of Twitter so I can't read it anymore. I recall a discussion that followed suggesting that perhaps half of your donation went to something other than running the website. Even if it were a much smaller percentage than that, I would still feel like I was being manipulated. If I donate to the National Wildlife Federation, I don't mind what they spend on overhead; I understand that's part of running a nonprofit. But I would not be happy if I learned that they turned around and donated my donation to the ACLU.
We obviously see this differently, and I'm definitely a free rider on Wikipedia on your dime now. Or on two or three pennies of your dime.
I also thought I'd heard of the Wikimedia Foundation making sizeable grants to things unrelated to Wikipedia. But going through a couple years' worth of audited financials and their annual reports (which list major grants) didn't turn up any such examples.
It remains possible that the Wikimedia Foundation did some of that prior to 2022, which is how far back I went. The Wikimedia Endowment's grantmaking, which began in 2023, is restricted to "support[ing] the technical innovation of Wikipedia and the Wikimedia projects and their growth for the future".
I think most pages are shaped by people who are very invested in that particular subject. Woodgrains thinks there's a pro-Mao conspiracy, but there are really a million pro-this-subject conspiracies out there.
Also, what IS the proper degree of criticism to express towards Mao? How can there possibly be a single answer to that?
Wikipedia is significantly above average and a relatively good source on social issues. It has well-understood political biases, which I don't like, but it certainly outperforms peer organizations. I don't trust it to report the truth, but I trust it substantially more than the NYT or a variety of other journalistic outlets and, frankly, more than a number of meta-analysis...meta-analysises....meta analysi...in published journals.
For example, the Wikipedia article on Biology and Sexual Orientation:
Specifically, the section on twin studies. The core issue is that we still don't really know why people are/become homosexual. We know it's not purely genetic, because we've got twin studies. We find a homosexual with an identical twin, we go talk to the identical twin, and most of the time the identical twin is not also homosexual (when I dug into this, I was seeing concordance rates of 50%, they're reporting concordance rates as low as 25%, which seems weird). This is an ongoing area of confusion, because there's clearly something going on genetically, 25%/50% concordance is still way higher than the base rate of homosexuality in the general population, but it's really hard to determine what the other factors are, or most likely, what the gene-environmental interactions that determine homosexuality are. (1)
Touchy subject, right? And if you read the Wikipedia article, you'll see them downplaying this a lot for fairly obvious political reasons. And that's bad. But...man, that article is a lot more honest than 99% of media I've consumed on this topic. It's better than I've gotten from personal conversations with experts in this area.
I think a lot of conservatives and centrists have well founded complaints and issues about political bias in a lot of media and especially in "factual" or "research" entities. And Wikipedia is certainly guilty of a lot of those sins. But it's one of the better actors, not one of the worse. And I want to celebrate the best of the "other" side, not criticize.
On the scale of general actors:
Wikipedia: Well known bias.
NYT/High status academic publication (Harvard): Bad but might be something interesting/useful in any given article.
Vox/99% of media/reference: Absolute dumpster fire, deserving of a 40k purge.
On that scale...I dunno, Wikipedia feels worth defending.
(1) As always, not my area of expertise, more than open to correction
As far as I can tell Wikipedia's most prominent bias is a tendency to recreate academic consensus. This has become more of a problem recently than it was when Wikipedia was founded, since academia has come to produce more outlandish and controversial claims since then. I think this bias might be a limitation of the site's "constitution"; Wikipedia seems meant to be no worse and no better than those sources of information society can agree are respectable and trustworthy. The consequences of this bias for political subjects should be obvious. And since it's also a bias you'll find in most respectable places it hardly makes Wikipedia an *unusually* bad source of information.
On the subtler controversies, I find that the academic consensus bias tends to show through a lot in "criticism" sections for books/theories/etc, which not only reiterate bias found elsewhere but also seem to have their own layer of filtering; e.g. a book known only to philosophers which claims that mind-independent objects exist seems more likely to grow a "criticism" section than a similarly obscure book that claims that nothing exists untouched by culture (and the latter will usually be focused on criticisms about how the touching happens rather than whether it happens).
On the unsubtle issues (like Maoism, but also nationalism, certain wars) I have seen some slow-boil edit wars about stuff it seems like nobody should care about, like the national anthems of long-defunct states. Sometimes the propaganda is obvious enough that you can easily infer the truth by negation, but the more I think about the weird stuff people do the less confident I am in my ability to notice all of it.
I don't know - I spend a lot of time on paleontology Wikipedia, and I realize that I actually have very little sense of what the academic consensus is on questions like when life first appeared, because every relevant Wikipedia article mentions every paper claiming early evidence of life, and I have no idea which are considered consensus and which are one-offs that should be treated like just a single study.
Or a journalistic consensus, if academia does not care about the topic sufficiently.
One problem with "academic consensus" is that if there is exactly one published paper on the topic, then the paper *is* the consensus. The easiest way to achieve that is to invent a new word (see "TESCREAL").
Wikipedia should only host truth, but if we're willing to lower our standards to things that could be systematically recognized and promoted by a Wikipedia-scale institution, I personally can't think of a good alternative that wouldn't be very close to academic consensus. There might be more accurate proxies for truth, but figuring out which is most accurate wouldn't be much easier than figuring out the truth.
Where "reliable" means "says things Wikipedia admins agree with". On anything remotely controversial, the only way to get actual truth out of Wikipedia is to read the talk pages as well as the article, and take note of the sources the admins are excluding.
This can be more trouble than it's worth, but in that case it's not worth using Wikipedia and probably not worth seeking truth; stick with rational ignorance.
I think the first Talk page I ever looked at was for the Celtic harp. Or some other instrument that occasioned dispute between Scots and Irish. I realized the fun was on the Talk page.
Some of the trouble with academic consensus follows from simple incentive analysis: one can expect academic consensus to be biased on any issue that challenges the status of academia as authoritative.
One might think this is okay - the issues that challenge academic status should surely be quite small and dry affairs, having to do with historiography, literary provenance, semiotics, metaphysics, and other multi-syllabic terms unlikely to make it out of a library basement.
In reality, though, they reach into any issue that people care about enough to build an online identity around it. So: politics, religion, health, epidemiology, energy, environmentalism, evolution, race, and really, I could probably have stopped after the first two. If academic consensus only concerned itself with stuff relatively few people care about such as the mass of the Oort Cloud or which polysaccharides can be used to make fiber, or easily checkable stuff like the primality of some 50-digit integer, no one would question it. People question academic consensus precisely because it gets pulled in as an authority on claims that aren't trivial to check and that affect a great deal of policy.
This is currently a lot of issues! When it comes to such issues, academic consensus suddenly becomes very sensitive to which individuals are part of academia when a controversial, policy-driving claim turns up, and by extension, what those individuals happen to think, and how able they are to influence their fellow academics, either to change minds, debate publicly, or withhold funding or credit privately.
In such cases, a third party should trust academic consensus about as much as they would trust anyone with a definite position on anything. Someone with no opinion on abortion ought to trust Planned Parenthood to thoroughly defend one position, but not to present all of them. Ditto the NRA on guns, Scalia on textualism, the Pope on the historicity of the Apostles, etc. A more complete picture requires consulting sources with an incentive for thoroughness in other directions, and with comparable resources: National Right to Life, Handgun Control, William O. Bradley, Richard Dawkins.
Then comes the equally hard part of resolving conflicting claims from each source, establishing standards for said resolution, and so on.
For positions touching on academic consensus, one would have to turn to sources that make claims conflicting with academic consensus, and that also have comparable resources. The first condition is pretty easy to satisfy; the second is particularly hard. For an example, see the current state of physics surrounding string theory and its opponents. Humphrey Appleby is a physics professor, and a reader here: he could doubtless elaborate.
Funny - every time I'd medium-dive into a Wikipedia article that looked a bit off to me, and look at the Talk page and discover a hotbed of controversy, the primary source of frustration was typically some recognizable Wikipedia userID attached to a message that abruptly waved "WP:RS" (Reliable Sources) in the face of whoever raised the complaint. Then I'd read the WP:RS article and find it's a lot of text that looks comprehensive and reasonable at first glance, but turns out to be somewhat circular on second reading.
Thanks to TracingWoodgrains' *extensive* article, I find a lot of it is traceable to one David Gerard (and a lot of like-minded senior editors who weren't cowbirded off the site), and his background in dubious sites like RationalWiki and /r/sneerclub. Apparently an entire slice of where the world goes for authoritative truth on controversial subjects - Wikipedia - is functionally determined by "Reliable Sources", which is quietly defined as "Sources This One Guy Believes Are Reliable", and enforced by his apparently epic levels of obsessiveness.
If TW weren't comparably obsessive, I doubt I would have known this.
> But Marxists also complain a lot about bias on Wikipedia from the other side; presumably both complaints can't be true.
Sure they can: different topic areas attract the interest of different influence campaigns. Just look at Eastern European history articles: you get Serbian nationalism on some pages, Bulgarian nationalism on others, Polish nationalism on others yet, etc. That doesn't add up to overall reliability; it adds up to incoherence.
That quote is referring to the Mao article. Different pages having different biases has nothing to do with that quote. Some people complain the criticism of Mao in that article isn't harsh enough while others say it's too harsh.
And it’s quite possible for some parts of each to be true! The article isn’t a single article written by a single human with a consistent bias throughout it - each part is written and edited by dozens of people, and there may well be biases in one direction in some parts, biases in another direction in a second part, biases in a third direction in another, with certain perspectives being underrepresented in all parts. There’s not only two sides to any of the relevant issues, and the article likely doesn’t lean towards the same side throughout.
Sure, there are differences in bias between article sections or even sentences. But they tend to be small, and it would be unusual to see an article swing from an obvious pro-Mao bias to an obvious anti-Mao bias. For this to happen, you need editors on both sides inserting biased sections, but a lack of editors on either side willing to work to improve sections they disagree with. So both sides just leave the biased sections they disagree with alone (perhaps because they didn't bother reading other sections). This is more likely to happen on obscure pages.
If you have more active editors on the page, you can get disputes and edit warring instead, which may then trigger a discussion to reach consensus or a vote. Ideally, the end result of this is an unbiased article that everyone is somewhat happy with; realistically things won't be perfect and one faction will likely have more power in the dispute and the consensus will still be a little biased. This is how an article can have a consistent bias in one direction even if it is written by dozens of people with different biases.
That's a possibility, but I suspect that in different sections of the article you just have different editors and moderators with knowledge of different parts. People who will recognize a bias in one set of information won't recognize a bias in the description of other things they aren't as familiar with. No one's going to be familiar enough with all the parts of Mao's life to catch everything.
I don't really think that accounts for it. In my experience, a lot of Wikipedia editors are subject matter experts; often professors or PhDs. They've studied the life of Mao. They know their stuff.
I'm sure there are many subtle errors experts miss, as you say. However, if there is a subtle error or bias in a chapter of Mao's life the editors are less familiar with, it's also going to be too subtle for the average ACX commenter and the general masses complaining about bias in Wikipedia. They are generally complaining about big-picture stuff rather than the type of thing even a scholar of Mao would miss.
> They removed the part of the wikipedia article discussing the similarities between dengue and coronavirus, and why we would, a priori, expect the covid19 vaccine not to work (It's actual mechanism doesn't seem to be as a vaccine, in that memory b-cells aren't being triggered, and we don't seem to have a perpetual (2+ years) memory of the exposure.)
They probably removed it because it was biased and misleading or untrue. We do form lasting memory B cells after the vaccine, and the vaccine, while not 100% effective, does work.
If we were magically given a perfectly unbiased encyclopedia, everyone would still think it had biases. We would read along until we encountered a topic we are biased on and interpret it as a clear example of bias in the encyclopedia.
Short of making testable predictions, which is often not possible, I'm not sure there is any way to distinguish a bias in the encyclopedia from a bias in yourself.
I'm curious where you even heard that the vaccine doesn't cause production of memory B cells. I've seen a lot of COVID vaccine skeptics, but I hadn't heard that one before.
The point is that you're only showing there is a difference of opinion between you and Wikipedia. You have no way of knowing whether the bias lies with you or Wikipedia. From the inside, any bias just feels like you are unbiased and others are clearly biased.
Does anything like MetaMed (https://en.wikipedia.org/wiki/MetaMed) currently exist? I'm facing a tough medical decision, and experts seem to be split on what I should do. I have a tentative view after reading some of the literature, but I'd like someone experienced with thinking about these kinds of questions to look over the evidence.
Samotsvety's site says they're "open to forecasting consulting requests" - no idea of the price, but MetaMed sounds like it was pretty fancy, so if you're wishing for that I guess you're willing to pay a good chunk. Only a couple of members are listed with medical experience, but then again any given problem they tackle will only have a couple of domain experts, right?
Or: feed it all (your situation, your options / possible outcomes, your opinions on all of that, your general life values, and all of those journal articles) into the best model of each of the major LLM providers. It might feel bad, but you can absolutely do worse talking to real doctors (who are themselves probably talking to the LLMs anyways). Probably the biggest obstacle you have is with deep comprehension of medical journal papers, right? Complex niche knowledge is where LLMs shine; they're like living textbooks. If it knows what you're looking for, it will make the relevant information accessible to you, and then you can think it over for yourself.
And, I certainly want to say that I hope it will go well for you.
Any counter-arguments to "it is worse if a zillion gazillion people are slightly inconvenienced than if one person is horribly tortured"?
Mine would be: you can only measure facts, not values. Consider this: https://en.wikipedia.org/wiki/Th%C3%ADch_Qu%E1%BA%A3ng_%C4%90%E1%BB%A9c - was it a good thing or a bad thing? One person set themselves on fire and died a horrible death, and it resulted in some level of political pressure on the South Vietnam government to stop persecuting Buddhists. Can you measure whether it was overall a good or a bad thing?
In some cases, like torture vs. discomfort, you can make an intuitive guess. But it is not a measurement, and cases like above show how it is not a measurement.
> Any counter-arguments to "it is worse if a zillion gazillion people are slightly inconvenienced than if one person is horribly tortured"?
As I see it, from the perspective of game theory this is equivalent to "it is worse if a person is slightly inconvenienced, than if a person is horribly tortured with probability 1 : zillion gazillion".
So anyone who disagrees with this should provide examples of inconveniences they are volunteering to experience in order to avoid horrible things with such small probabilities.
For example, any time you were impolite to someone on the internet, there was a probability greater than 1 : zillion gazillion that your interaction was the last straw that drove the person insane, which made them kidnap someone and torture them to death.
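A toy version of that scaling argument, with disutility numbers that are pure placeholders:

    # Placeholder disutilities, purely illustrative.
    N = 10**18                # stand-in for "a zillion gazillion" people
    u_speck = -1e-9           # one slight inconvenience
    u_torture = -1e6          # one horrible torture
    # Collective framing: N specks versus one torture...
    print(N * u_speck, u_torture)        # -1e9 vs -1e6
    # ...is the same comparison, divided by N, as the per-person gamble
    # of one speck versus a 1-in-N chance of torture:
    print(u_speck, u_torture / N)        # -1e-9 vs -1e-12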
Interesting! My main thing against utilitarianism is that you cannot measure values, you cannot put a number on them; it is basically opinions. So what you are saying is that what one can more or less guess-measure, or put a number on, is the probability that that opinion is correct? And then this is what you can multiply? So when one trillion gazillion people have a speck in the eye, there will be one thousand people who are so sensitive that for them it is utterly horrible?
> my main thing against utilitarianism is that you cannot measure values, you cannot put a number on them
I understand the sentiment, but at the end of the day, you still have to make choices. You have one million dollars, and you can either build a new hospital or a new gallery. You can say that both health and art are of great importance, in a way that is incommensurable, but at the end of the day, you either build the hospital or you build the gallery, which from some perspective means making an implied value judgment about the supposedly incommensurable things.
Money (and attention, work, time) is the common resource that forces us to make decisions as if we could put a number on things.
If you had a budget to fix thousands of flickering lights in the entire city, but you could redirect that money to instead cure one guy suffering from a chronic painful illness, that is probably the most realistic analogy to "dust specks vs torture" in real life. Would you always prioritize saving individuals in pain over fixing minor annoyances that annoy many?
But that is exactly it. The original Bentham, Mill, and others who invented utilitarianism meant it rather explicitly as a political philosophy for the government, not for the private individual. Because governments face choices like that. This is also why it treats people as interchangeable. For me as a private individual it is entirely okay to prioritize helping a friend over helping a stranger, but for the government it is not allowed.
Utilitarianism is not well-optimized for private individuals; basically every parent would rather save their own drowning child than two children of someone else. It is only the government that is not allowed to think like that, that has to see everybody, or at least every citizen, as equally important, and has to budget with many trade-offs. Indeed, exactly as you wrote, the government basically measures utility in money. It will not spend infinite money on saving one 95-year-old cancer patient.
Sam Kriss had this interesting observation that Rationalists somehow talk as if they had infinite power. Okay, let's be fair: if a Superintelligence happens, it will have a lot of power and will basically automatically become something like a super-government, so on that specifically, utilitarianism is justified. But all this worrying about shrimp welfare as a normal individual with little power...
> Sam Kriss had this interesting observation that Rationalists somehow talk as if they had infinite power.
Well, I can have an opinion on right and wrong, even in situations where I don't have the power to actually do something meaningful about it. From outside, it may seem like playing king.
True enough, but understanding that one lacks power should IMHO lead away from utilitarianism:
1) it is okay to simply have opinions instead of trying to numberify everything
2) you know, the whole Bayes thing is for those situations where you want to figure out something entirely on your own, like how Einstein figured out relativity. But it is also okay to just contribute ideas to the general discussion, and eventually the hivemind of the public will figure it out
3) it is okay to care for some more than for others, basically because of closeness (EAs explicitly do not want that)
In your counterargument, it's not really a person being tortured for the benefit of others. It's something the guy voluntarily chose to do to himself in an attempt to help others.
Here's a short story (5 pages) by author Ursula K. Le Guin about a concept like the one you describe; it's pretty interesting:
Recovery time should be factored in. Those zillion gazillion people will all have gotten over their mild inconvenience in a couple of minutes. Not so much for the tortured, or the people who know about the torture.
Yes: questions like this are insufficiently coherent to yield a precise answer. In some sense morality, like consciousness, is a ret-conned fictional narrative and so there's no objective ground-truth to discover. Debating questions like this is like arguing about how lightsabers really work.
If you believe that a zillion gazillion people are going to be slightly inconvenienced, you're almost certainly just wrong. Your priors for a zillion gazillion people even *existing* should be super super low, and your priors for an effect that is powerful enough to exactly-slightly-inconvenience all of them should be even lower. There's no way that a standard human could live long enough to even see that many people, much less verify that they were slightly inconvenienced.
This problem then reduces to Pascal's Mugging.
Incidentally my objection to the trolley problem is similar. If you're in a situation where you believe that killing one person is the only possible way to save several people, then it's likely that you're wrong and there's a better option you haven't thought of.
There's a relevant post in the Sequences called "Ends Don't Justify Means (Among Humans)".
Interesting! My objection to the trolley problem is this: whoever had set up the situation is 1) obviously evil 2) obviously powerful. That is a situation like a platoon of SS is pointing guns at you, so that you can't just free those people. At that point, you simply do not have much moral responsibility either way.
The correct answer to the trolley problem is to stay well away from the switch (or the bridge with the fat guy or whatever), look for the guy with a clipboard who's trying not to look like he's watching the switch, and shoot him immediately. Five dead innocent victims against one Mad Scientist taken out of circulation? That's always going to be a net win.
Now this is highly interesting! Why did the Romans, who were not a particularly kindly people and had enough slaves, ban human sacrifice almost completely (I think there was one huge exception, when it looked like they would be destroyed by Gauls)? Or same for the Greeks: the Athenians had no problem massacring / enslaving the entire population of Melos simply because they wanted independence from Athens, but human sacrifice, nope? Or the Old Testament: dashing out the brains of infants of enemy populations is fine, but no human sacrifice; God even stops Abraham from doing it because it was just a test. I don't think these people were all that motivated by compassion or moral concerns. Why then?
Why did they not sacrifice at least some condemned murderers who would be executed anyway, often very brutally, like crucifixion? Thinking of how the survivors of the Spartacus slave revolt were crucified: would not even a moral, compassionate person say that cutting their throats while praying to Jupiter would have been less bad? What was really happening? Do you understand it?
My best guess is that people sometimes realize that some things must be made completely taboo, because if you allow some of it, you might get a lot of it. If sacrificing one convicted murderer for a better harvest is doable, maybe one day basically all people will be burning their firstborn children alive for Baal, like how some say it was done in Carthage. NRxers have this theory that a virtue-signalling competition, specifically in religious-holy virtue, can absolutely spiral out of control. On the other hand, if you just have a law that murderers and rebellious slaves are crucified, it will not spiral into crucifying innocent people.
Is there any solid evidence that GLP-1 agonists deliver health benefits that can’t be chalked up to the weight loss they cause? Every study I’ve found reports at least some weight loss alongside any benefit, and the one alcohol-use trial was negative.
Scott thought there probably were other benefits. Seems to me the best path to an answer is to ask one of the better AIs for evidence for and against, and ask it for links to sources. Check the ones that look first-order (research) rather than articles in the media yammering on about research findings.
As I write this I'm realizing, with some uneasiness and regret, that I now mostly go to GPT-4o with questions I would have asked on here up until a few months ago. I get answers 100% of the time with GPT. Here? Maybe 40% of the time. And there have been a good number of questions that went unanswered that I was sure some readers could have answered. I don't think other readers have an obligation to answer, really, it just seems cold not to take the trouble to do it. Jeez, why not be prosocial here? You're protected in all kinds of ways from being exploited.
So I guess now me : GPT-4o :: lonely guy : AI sexbot. When it comes to getting answers to questions, I am turning to AI in response to how little real and internet people have to give, on average. (In other areas of my life I have not yet felt the need to suck on a cyberxenomorph, though perhaps that's coming.)
Really? I asked ChatGPT your question and it gave me a pretty decent answer:
"Multiple large RCTs (e.g., LEADER, SUSTAIN-6, REWIND) show reduced major adverse cardiovascular events (MACE) in patients on GLP-1 RAs vs. placebo, even after adjusting for weight loss ... Several studies — including phase II trials like the Semaglutide in NASH study — have shown improvements in liver enzymes, steatosis, and even fibrosis stage in NASH patients. Some of this is clearly tied to fat loss, but liver-specific effects (e.g., reductions in hepatic inflammation markers and ballooning) appear disproportionately strong compared to weight-matched controls ... GLP-1 RAs seem to slow progression of albuminuria and preserve eGFR, beyond what you'd expect from weight loss or glycemic control alone. Again, some RCTs (e.g., LEADER) support this."
Anecdotally I have 2 friends who were pre-fatty-liver and their liver function improved soon after starting on GLP-1's, before significant weight loss occurred.
It's all hallucinated. Not a single study exists that controlled for weight loss. It's just AI hallucinations. You can check the studies it provided and see for yourself.
Just asked GPT-4o and got a mountain of info supporting the idea that they have benefits independent of weight loss. It quoted some RCTs. Here is its response. However, you do have to go to the links it gives for the main articles and make sure it didn't hallucinate them.
Scott’s old piece quotes studies, I’m pretty sure. Do you know the one I mean? Not the recent one about the gray market in those drugs, but an earlier one called something like “why does Ozempic cure everything?”
It's funny that Scott was pooh-poohing the reality of supernatural entities due to their lack of mathematical knowledge in Universal Love, Said the Cactus Person, but IRL, Ramanujan claimed to have gotten all his mind-boggling theorems from his family goddess appearing to him in dreams. Ramanujan was a mathematician of whom it has been said that the word "genius" utterly fails to capture his brilliance. It is amazing that there was such an intrusion of extremely raw spirituality into a domain commonly perceived as very hard-nosed and rational (though is it really? I read a book by a mathematician arguing mathematics is really primarily about intuition).
Which book did you read about mathematics and intuition?
I saw on another thread someone recommend the book, "Mathematica: A Secret World of Intuition and Curiosity" by David Bessis. I read it and really appreciated it.
To state one of the main takeaways from the book: when you learn something well enough, it seems intuitive. But there can also be particular perspectives that make certain things seem intuitive.
As for Ramanujan thinking his ideas came from a goddess, i.e., revelations: Descartes is actually interesting here. I do not have the time now to delve into the details, but, from a quick ChatGPT summary of his main ideas regarding knowing something is true:
1. We discover truth through reasoning (thinking),
2. but the reliability of our reasoning, for Descartes, depends on God having created us with a mind capable of recognizing truth.
This is quite interesting, because I personally rely on feelings to "know" whether something is true or not. Contradiction and certainty evoke strong emotional responses in me.
Also, in terms of revelations, I once went to a poetry reading where a poet ascribed their poems to coming from God.
Chain of custody aside, in my experience, ideas just pop up in my mind. I often say that my brain does stuff and I take the credit for it.
In general, I am very interested in where insights come from, and how much control we may have over insights. A couple of years ago, I read the book, "Seeing What Others Don't" by Gary Klein. It was an entertaining and thought-provoking book.
If anyone has recommendations in that direction, I would love to hear.
Yeah, it was me who recommended that book, that's the book I read. I loved that bit about how learning to think in more than 3 dimensions is an embodied thing, and it's only when you intuitively understand what say, 8 dimensions mean, that you can do work in that domain.
I'll admit upfront that I feel annoyed by the mystification of math and Ramanujan, but I'll try to explain without any annoyance.
Ramanujan is an important mathematician who was described as incredibly talented, but he wasn't the be-all and end-all of mathematical accomplishment. (Hardy, his fan, friend, and mentor-type-of-guy thought Hilbert was greater still. Edit: this part is wrong, see below.) Other great mathematicians proved theorems as impressive as his without any participation of goddesses, so we know that the brains of at least some humans can do it without assistance. But there is something the unassisted human brain can't do.
In Scott's story it's not that the entities "lack mathematical knowledge": it's that the character wants them to perform a calculation that is dull, but absurdly time consuming, a calculation no human brain (or even a computer!) could do quickly enough.
That is to say: if a person wakes up and says "A goddess communicated to me this wondrous proof", that is astonishing but not definitive, because we know that some humans at least can come up with amazing proofs. If a person wakes up knowing the factors of some bonkers huge number, that is more convincing, because for sufficiently large numbers (say, the product of two enormous primes) it may be that no human or machine can find the factors at all, while anyone can verify them.
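A tiny illustration of that asymmetry (primes far too small to be cryptographically meaningful; the numbers are placeholders):

    # Verifying a claimed factorization takes one multiplication;
    # finding it takes a search that becomes hopeless for numbers
    # with hundreds of digits. 10007 and 10009 are both prime.
    n = 10007 * 10009
    claimed = (10007, 10009)
    assert claimed[0] * claimed[1] == n     # verification: instant

    def trial_division(m: int):
        f = 2
        while f * f <= m:                   # discovery: brute-force search
            if m % f == 0:
                return f, m // f
            f += 1
        return None                         # m is prime

    print(trial_division(n))                # (10007, 10009)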
You note: "[Ramanujan] put out a lot of conjectures that would be proven only after his death, and he did say the conjectures were coming from his goddess. If they were arrived at primarily through work, it wouldn't have fallen to others to prove them no?"
This is not that weird. There are lots of statements and conjectures that mathematicians think are almost certainly true but aren't able to prove. In some cases someone gains a valid insight that one can't make totally rigorous, not enough for a real proof, but enough to make a very plausible guess. There are piles of unproven mathematical conjectures made on an empirical basis. No goddess participation necessary. What was going on with Ramanujan was a mix of intuition and reluctance to write down proofs when he actually had them. Sometimes he probably had the proofs but didn't record them. Other times it was probably just a good hunch. Both would be totally normal (for a prominent mathematician) and not unique to Ramanujan.
Is mathematics "very hard-nosed and rational" or "primarily about intuition"? Both, in places. There's no contradiction. The end results of mathematician's labor - proofs - are supposed to be perfectly rational and logical and should not rely on appeals to intuition. But the methods by which mathematicians arrive at proofs don't have to be logical at all. Intuition plays a big role in it. You are allowed to do drugs, talk to goddesses then roll on the floor for a while if it helps you (real things people did, though not all at once), as long as the proof is airtight in the end, it's all good.
You note: "I've heard the culture of mathematicians is singularly averse to finding practical applications".
You can find various prominent mathematicians saying things that sound that way, but I think in reality most people are less romantic. It's not that mathematicians hate the idea of practical applications; it's that most of the time they do work that seems needed for the development of some mathematical theory and don't have any idea what the practical applications of it might be, and can't really know, because the distance between high-level theories and applications is so huge and tangled. Fermat's little theorem was proven in, like, the 17th century, and the practical application of it is that if hackers intercept your e-mails, they can't read them. Do you think Fermat could know that, could ever figure it out?
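To make that Fermat-to-email chain concrete, here is textbook RSA with toy numbers; the proof that decryption inverts encryption rests on Fermat's little theorem (a^(p-1) ≡ 1 mod p for prime p not dividing a). A sketch only; real keys use enormous primes:

    # Textbook RSA with absurdly small primes, for illustration only.
    p, q = 61, 53
    n = p * q                     # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                        # public exponent, coprime to phi
    d = pow(e, -1, phi)           # private exponent (Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)          # encrypt: m^e mod n
    assert pow(ciphertext, d, n) == message  # decrypt: c^d mod n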
Most pure mathematicians get asked all their lives: "well, what are the applications of the thing you are working on?", and they have no idea the same way a lorry driver transporting sand from A to B can't have an idea what each individual grain of sand will be used for, and it gets on their nerves the 1000th time somebody asks that. So there's a reason to retreat to some artsy pose: "oh, it's so pure and lofty, I would never soil myself with the matters of practicality", but the real feelings are less theatrical.
But in that case... let's compare that proof with evidence given for a crime at court. The real or true aspect of the crime is actually doing the deed. The evidence given at court is a complicated, circumspect, bureaucratic, fallible process, because you have to somehow convince the judge or jury this way or that way, because society must function somehow.
Can it be said the real math is the intuition, and the proof is just the bureaucracy around it, because science as a collective effort must function somehow?
I have read Heisenberg's Quantentheorie und Philosophie. He said both mathematical and empirical proofs are basically just for other people; once you stumble upon an idea that is simple and beautiful, i.e. elegant, you know it is true. The rest is basically a set of bureaucratic hoops to jump through for the sake of convincing other people.
Which suggests science is at some level non-rational; it's the bureaucratic process of convincing other people that is rational.
> both mathematical and empirical proofs are basically just for other people; once you stumble upon an idea that is simple and beautiful, i.e. elegant, you know it is true.
I think the problem is with the word "simple".
First, if we adopted this as the official rule of science, it would quickly escalate to status games, where various people would insist that the idea is "simple for them" and it's not their fault that others are too dumb to perceive its simplicity.
You probably wouldn't be happy with Einstein telling you "time is relative, because... well, it obviously *is*, right?" (Or, consider quantum collapse versus many worlds. This is a difference in interpretation rather than measurable data, but still each side insists that their idea is obvious to them.)
Second, mathematicians sometimes make mistakes, too. It is possible to make a mistake in a simple and elegant proof just because you made a wrong assumption, such as "prime numbers are odd". The rest of the proof may be simple and elegant, and yet the theorem could fail for e.g. two and its powers.
If an idea is simple and beautiful to an experienced mathematician, then it is probably 99% likely to be correct. Still worth checking for the remaining 1%.
Finally, there may be situations where we don't have a simple and elegant proof (yet?), but we still want to know the answer, and sometimes we succeed in arriving at it using a complicated and non-elegant proof. Mathematicians can still feel bad about it, as for example with the computer proof of the four-color theorem, but for the moment it may be the best we've got.
"Can it be said the real math is the intuition, and the proof is just the bureaucracy around it, because science as a collective effort must function somehow?"
No, it most certainly cannot. Your intuition and a nickel will buy you a stick of gum. If you don't have the proof, you don't have anything.
It happens frequently (at least to me) that when setting out to prove something, I will have a clear and intuitive idea of why it's true, and will be able to start writing the proof immediately. Sometimes I finish the proof just as immediately. Other times, the effort of writing it out carefully and rigorously reveals a subtle flaw in the line of reasoning that had been sketched out in my head. Usually I can find a way to patch that flaw. Sometimes it turns out to be bigger than I'd realized at first, and the whole approach must be discarded. If you want a really, exceptionally famous example, consider Pierre de Fermat. It seems highly likely that he felt he had a solid, intuitive grasp of why a^n + b^n = c^n has no non-trivial solutions for n>2. And that statement is, as it happens, true. But the chance that his solid, intuitive grasp actually traced out the lines of Andrew Wiles' eventual proof is basically nil.
There's the gap between intuition and proof writ large for you: 300 years, and 129 pages. Even if some mathematician is such a transcendent genius that their intuition is literally never wrong, nobody else has any way to *know* that without a proof. And even if they just trusted such a person to be right, they could hardly understand it for themselves without it being laid out clearly and in full rigor.
I must apologize that this is very hard to put into words, so I might be not making sense, in words. Please try to read my mind :)
My basic idea was that the really important part is whether the crime was done or not, not whether the crime can be proven at court by the rules of permissible evidence; that is just bureaucracy. The deed is the real thing; the proof and evidence are basically just social rules. Or a better example, maybe: it is more important to invent a better mousetrap than to prove to the patent office that it really works and there is no prior art. The mousetrap matters more than the patent does.
"But the chance that his solid, intuitive grasp actually traced out the lines of Andrew Wiles' eventually proof is basically nil."
"nobody else has any way to *know* that without a proof"
Again this sounds too much like proving the mousetrap at the patent office is more important than actually inventing it. I think this has things backwards...
Yesterday I made chicken soup. Almost no one knows I did. And? It still happened. Why do I care who knows about it?
This might be true in some parts of mathematics, but elsewhere intuition is notoriously unreliable. Combinatorics and computational complexity theory seem to be like this: major conjectures are frequently disproved even when widely believed for decades. The truth manifold just doesn't have a nice predictable shape.
The Man Who Knew Infinity suggests otherwise: "[Hardy] assigned himself a 25 and Littlewood a 30. To David Hilbert, the most eminent mathematician of the day, he assigned an 80. To Ramanujan he gave 100."
That stuff about mathematicians being averse to applied work comes from someone I spoke to who declined to get a math PhD because he disliked what he perceived as a cultural aversion to finding applications.
And the claim about Ramanujan being something beyond genius comes from mathematician David Bessis, so noted: there are divergences of opinion among mathematicians.
Bessis also said mathematicians never read math books (as in, books only mathematicians could read) cover to cover, they only go look up specific things they need, is that true?
As to Scott's story, sure, maybe a feat of computation would be more difficult, but it also seems strange to expect a higher being to do something like that.
"Bessis also said mathematicians never read math books (as in, books only mathematicians could read) cover to cover, they only go look up specific things they need, is that true?"
Mostly true. "Never" is an exaggeration, but it's very normal to only read one specific chapter, or to use the book to look up some info. Or a book might cover material you already (mostly) know and you use it to refresh your memory from time to time, but not have to read every word. There's a saying to that effect that you never read math books, you only re-read them.
The logic of Scott's story is: "if DMT entities can perform a calculation, then they are real and we can prove it to everyone", not "if they are real, they 100% can do it". It's completely plausible for a real ghost or spirit to not be able to do advanced math (after all, you and I are real, and we can't pull it off), it's just that the story is funnier if they totally can and are annoying about it.
I don't know the timeline, but Ramanujan put in a lot of rigorous and formalized work in areas like the famous taxicab numbers. Did that work precede his seemingly spontaneous insight about 1729? I read an article that implied this, but I can't find it now. It could be that, in addition to being a bona fide mathematical genius, he was a bit of a showman or prankster when relating how he got his ideas.
In Mathematica, David Bessis says Ramanujan put out a lot of conjectures that would be proven only after his death, and that Ramanujan did say the conjectures were coming from his goddess. If they were arrived at primarily through work, it wouldn't have fallen to others to prove them, no? Bessis is a mathematician, so it makes sense to believe his account.
I am reminded of the quote: "God created the integers, all else is the work of man."
There is a sense in which you're right, that integers aren't sufficient to model quantum mechanics, but the tools of mathematics are richer than you seem to think, and have developed over the centuries to include some decidedly unintuitive objects: I'd say complex numbers are necessary, but also seem to be quite sufficient. If something comes up, we can go up a level to quaternions or higher. Whatever you're gesturing at with your "math that starts with the wave-particle duality" is probably isomorphic to one of those.
Now, there IS a problem with renormalization, but that's well beyond the Heisenberg uncertainty principle, and at any rate, the point is that those are still just PHYSICS problems: mathematics has tools to deal with that kind of thing, like zeta function regularization, for example.
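For what it's worth, the "complex numbers suffice" point above can be made concrete in a few lines: amplitudes are complex, probabilities are squared magnitudes, and interference is just the cross term. A sketch with made-up amplitudes:

    # Two-path interference with complex amplitudes (made-up numbers).
    a1 = complex(0.6, 0.3)    # amplitude via path 1
    a2 = complex(-0.5, 0.4)   # amplitude via path 2
    # Summing probabilities ignores interference...
    print(abs(a1)**2 + abs(a2)**2)   # 0.86
    # ...summing amplitudes first, then squaring, captures it.
    print(abs(a1 + a2)**2)           # 0.50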
I would be exceedingly surprised if anyone working on quantum mechanics had any difficulty grasping complex numbers: certainly not the basics, and even complex analysis is generally considered remarkably elegant (at least among physicists; the math people will object to the lack of rigor in the pedagogy of how I was taught it).
Even if you can formalize it, I don't see what you hope to achieve by making this vague notion of interacting probability clouds the foundation of a new kind of mathematics that we don't already have. I suppose if your intuition served as an infallible oracle for arbitrarily complex calculations of this kind, such a thing might be useful, but if you're going to fall back on calculations eventually (you are), less so.
Ooh, I've heard the culture of mathematicians is singularly averse to finding practical applications, I don't think they care about the relation of math to the physical.
I think they shouldn't indulge so much in pure mathematics, but it sounds fascinating to me: proving a theorem, figuring out that something abstract is indisputably true. It's the only domain where you can be certain of something abstract, isn't it?
Can someone explain to me why the UK Prime Minister’s announcement that he will recognize a Palestinian state unless Israel fulfills a number of conditions, including agreeing to a ceasefire in Gaza, doesn’t all but guarantee that Hamas will not agree to any possible ceasefire offer put to them?
Define "meaningfully armed force". Obviously they don't pose an existential threat to Israel (not that they ever did), but a bunch of armed people hidden around in random places is always going to be a problem.
You have thousands of people who don't wear uniforms, who move about the strip through tunnels (only about 25% of which have been destroyed so far), in a landscape where they have broad support from the population they come from, and a huge number of unexploded munitions that can be repurposed.
It's not like they're going out in groups of dozens and shooting at people (several such battles happened when they concentrated in a few schools/hospitals, but those were cleared out). They pop out of tunnels, place bombs on tanks, or shoot people.
Something like half of Gaza has been taken over by the IDF at some point (they claim 75%, but whatever), but that doesn't mean they know where all the tunnels are.
You've had maybe 20 soldiers killed in the last few weeks; that's not a huge number when thousands of Israelis are operating in the strip.
It's not like the IDF knows who Hamas is, where all their weapons are, etc.
But, in military terms, the IDF hasn't 'accomplished' anything for almost a year now. They get random people with guns, younger and less-trained boys are recruited to replenish the ranks, and so forth.
If every hamas member evacuated to another country, a few years from now another organization would take its place.
>unless the Israeli government takes substantive steps to end the appalling situation in Gaza and commits to a long term sustainable peace, including through allowing the UN to restart without delay the supply of humanitarian support to the people of Gaza to end starvation, agreeing to a ceasefire, and making clear there will be no annexations in the West Bank.
That sounds to me like Israel needs to lift the blockade enough to allow food through in quantity, offer a ceasefire on what the UK considers to be reasonable terms, and agree to peace talks on what the UK believes to be a reasonable basis. They probably don't need to actually implement a ceasefire if Hamas insists on continuing large-scale hostilities, although the UK might then insist on Israel unilaterally taking some kind of defensive operational or tactical stance.
>That sounds to me like Israel needs to lift the blockade enough to allow food through in quantity,
There is no blockade. They are letting food in, in quantity. They paused food aid for a bit a month or so ago, but in recent months the main thing preventing aid coming in is that the UN refused to send in aid unless UNRWA was allowed to do the distributing, and Israel doesn't want that because UNRWA has been funneling food to Hamas (that's their point of view, anyway). The UN had hundreds of trucks full of food waiting just outside the Gaza border and Israel was asking them to send it in, and they were refusing, claiming that it was too dangerous (despite Israeli military offers to escort the trucks). Recently Israel paused fighting and created more secure transport corridors, and the UN is starting to let aid go back in. There is no blockade, just a disagreement about whether UNRWA can distribute the food with the UN refusing to send in aid for a while.
>Israel says it doesn’t limit the truckloads of aid coming into Gaza and that assessments of roads in Gaza are conducted weekly where it looks for the best ways to provide access for the international community.
>Col. Abdullah Halaby, a top official in COGAT, the Israeli military agency in charge of transferring aid to the territory, said there are several crossings open.
>“We encourage our friends and our colleagues from the international community to do the collection, and to distribute the humanitarian aid to the people of Gaza,” he said.
>An Israeli security official who was not allowed to be named in line with military procedures told reporters this week that the U.N. wanted to use roads that were not approved.
>He said the army offered to escort the aid groups but they refused.
>The U.N. says being escorted by Israel’s army could bring harm to civilians, citing shootings and killings by Israeli troops surrounding aid operations.
Thanks. I've been hearing conflicting claims on that front and was mostly going off of the "allowing the UN to restart without delay the supply of humanitarian support to the people of Gaza to end starvation" bit in the British announcement.
Multiple high-ranking Israeli officials, as well as leaked internal documents, have proposed exactly what we are seeing: push the Gazan population into the south and starve them, while also destroying the civilian water, sanitation, and medical infrastructure necessary for survival.
The blockade is still in effect. What ended is the *total* blockade. Currently Israel lets only a small and insufficient amount of food into Gaza and then fires on civilians who attempt to retrieve it. Obviously Israel denies this, but who are you going to believe? It's their word against virtually every independent organization attending to this issue.
Yeah, the "allowing" they are referring to is complying with everything the UN is demanding before the UN will send the aid in. Israel wants them to send in the aid, they're not stopping them.
It absolutely does guarantee it, but also this position has been massively overdetermined at this point.
Hamas has made it clear they will reject any ceasefires.
They are only interested in ending the war in exchange for remaining the dominant military power in Gaza (not in 'political' control), similar to Hezbollah's situation in Lebanon (before Hezbollah lost a war and ended up on the path to disarmament).
There's no 'pressure' they are responsive to. Whether Europe pressures Israel toward a two-state solution doesn't meaningfully affect Israel's capacity to get them to surrender. Maybe large-scale population transfer, or annexing a massive part of Gaza, would do it, but I highly doubt that's on the table.
The other thing I don't get is: if Hamas leadership is hanging out in Qatar, why is there no Western diplomatic pressure on Qatar? I mean, would it kill us to say "No more Qatar Airways flights until all Hamas leaders are handed over to Israel" or something?
1) Qatar bribes a lot of people with a lot of money. They have a lot of soft power. They just gave Trump a $400 million plane.
2) Realistically, the problem has always been that Hamas is not answerable to their political leadership abroad. And you need 'someone' to be the face of negotiations.
It's really important to recognize that Hamas political leadership were fine taking bribes and living in luxury abroad. They were in it for the cause, as long as other people's children were the martyrs, but it's not like they signed off on October 7th. Sinwar basically did a coup against the political leadership, and even Iran and Hezbollah were only vaguely in for a war against Israel 'in the future', but he miscalculated.
If you pressure Qatar, you lose a useful intermediary.
Heck, Qatar+Saudi Arabia just signed onto asking Hamas to resign and disarm, which would have been unthinkable a few months ago.
People seem to be concerned about payment processor censorship, but there's actually very little discussion on what can be done about it, save advocating for the S.401 Fair Access to Banking Act. Problematically, people now think that if the act were to pass, the payment processor censorship problem would be fixed. This is not true. The penalties are too low and practically unenforceable. As in, it couldn't be enforced even if it were to pass. The bill needs alterations and nobody that matters even seems to know that.
The only person I've seen mention this as a point of contention is Josh Moon on the Kiwi Farms, who talked to several lawyers about it.
Here's a link to where he talks about it. Just a warning, this isn't intended for the audience of this blog. It's mostly intended for right-wingers upset that they've been prevented from supporting their super edgy websites and who expect that to continue to be an issue in the future. It's also clearly a vent post about payment processors. Hence, there are many, many slurs and the language is more emotional than is strictly helpful.
Josh also went on a YouTube podcast with Kurt Metzger to talk at length about payment processors. This is good because he's the foremost expert on the subject of de-banking, but is seldom heard in any discussion of it, left or right, because both sides hate him for running his gossip website as he does, and for his belligerent personality.
The sense I have is that this is a people-applying-soft-pressure-to-payment-processors problem, i.e., there's some kind of religious group that applied soft pressure and the payment processors caved.
If that's true, then it seems like this is that rare type of problem that is *exactly* solved by getting really angry about it on social media. If the culture warriors on twitter can apply more pressure to payment processors than the religious group, then the problem should go away.
It's possible I'm wrong and there's some sort of actual lawsuit threat from the religious group?
(Apologies, I did not click your link to the slur-filled rant. I try not to read that sort of thing.)
I work in this field (smut) and you just get constantly kicked around by everyone, but if it weren't for mechanical business reasons, somebody would have jumped on getting the cash from what is, ultimately, a field with a lot of money in it. SubscribeStar got a bunch of clients because Patreon started kicking around adult content creators, and I was in that group.
The main problem is something like: John sells some porn to Bob. Bob's wife, Alice, sees the line item on their credit card bill for "Hot Teen Sluts," and goes to Bob to ask what this is. Bob says (lies) that he has no idea, so Alice calls the credit card company to dispute the charge. This heightened risk of payment dispute is broadly true, though it can happen for other reasons (kid getting the card and the parents being more willing to dispute than in the case where he bought a bunch of Fortnite skins, guy getting pissed that the camwhore didn't actually love him, etc). This produces a cost to the credit card company, so the % they take from the original company mechanically goes up to compensate for Bob. If you are a company which mixes sales of both porn and non-porn content (e.g. Steam, SubscribeStar, Patreon), you want the non-porn rates, but you are in fact selling porn content, which is in fact at an increased risk of payment disputes.
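To make the mechanics concrete, here's a toy model of how a higher dispute rate mechanically pushes up the fee a processor has to charge. Every number in it is made up for illustration; real fee schedules are more complicated.

```python
# Toy model: why processing fees rise mechanically with chargeback
# rates. All figures are hypothetical, not from any real fee schedule.

def breakeven_fee(base_fee: float, chargeback_rate: float,
                  cost_per_chargeback: float, avg_sale: float) -> float:
    """Fee fraction needed so expected chargeback losses are covered
    on top of the processor's base margin."""
    expected_loss = chargeback_rate * cost_per_chargeback / avg_sale
    return base_fee + expected_loss

# Hypothetical non-porn merchant: 0.1% of sales disputed.
print(breakeven_fee(0.029, 0.001, 25.0, 20.0))  # ~0.030, i.e. ~3.0%

# Hypothetical adult merchant: 1.5% of sales disputed.
print(breakeven_fee(0.029, 0.015, 25.0, 20.0))  # ~0.048, i.e. ~4.8%
```

Same base margin, but the Bobs of the world add nearly two points to the rate, which is exactly the gap a mixed platform is trying to avoid paying on its non-porn sales.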
The solutions I've seen are:
- banning all porn from your platform outright (generally done at the outset);
- bizarre and arbitrary ban lists to try to reduce payment disputes ("Reincarnated In A World Where Women All Love Being Violently Raped" presumably having a higher decline rate than "Big Titty Elf Island", but then you get all the standard issues of censorship, with the person making the list not able to do an actual analysis, and also frankly not caring to do so); and
- just having some type of partitioned sub-platform where they take a bigger fee (this is basically what SubscribeStar does).
I've heard that argument before and think it carries a lot of weight, but the question for me is: if payment processors only look at their spreadsheets and dispassionately cut out the (category of) customers with the highest risk of chargeback, why does it take a private initiative (Collective Shout) for them to take action? Or was that a coincidence, and Collective Shout got to claim credit where none was due?
The typical complaint I hear about porn companies having to shut down due to payment infrastructure threats is that credit card companies are run or infiltrated by prudes who use private enterprise to do what the government is forbidden to do. This might be the first time I've heard an argument that factors in the obvious profit motive to stay in the porn business, and honestly, it makes sense that these purchases get declined so often that it's denting the margin.
So now I wonder how much of the higher rates / shutdowns traces back to merely this, as opposed to prudes.
It sounds like the theory is: "this Australian activist group claimed credit for making the payment processors make Steam de-list all those incest games by doing a bunch of call-in complaints, but a more likely explanation is that the payment processors decided incest games carried too much risk of chargebacks and they're just not supporting them anywhere."
Probably, yeah. Patreon tightening the screws followed the same general pattern of adding increasingly arcane rules about what you can publish, and I'm not aware of any particular pressure campaign on them.
e: Well, probably it went "complain to Visa" -> "Visa thinks Steam is at increased risk of chargebacks" -> "Steam doesn't want to deal with it"
No problem, I put the warning there for a reason. The payment processor problem is likely not going to be solved by public outrage because I suspect it wasn't caused by public outrage. Payment networks have been doing this for years, and have been completely ignoring all the outrage that came before.
All of the people and companies (PornHub, SubscribeStar, Gab, Hatreon, DLSite, Wikileaks, gun retailers, Canadian truckers' GiveSendGo Pages, and many more) apoplectic with rage at losing business should have accomplished something if it were true that they respond equally to different types of outrage. I suspect this stubbornness is because they are in favor of these bans for ideological reasons. To address the broader point, it's a chronic problem and people should not have to marshal an outrage mob just to participate in e-commerce. That means a counter-mob is not really a satisfactory solution to the underlying problem.
Does it even need enforcement? The existence of such a law, even with nominal penalties, would let Visa and Mastercard point to it as a way of resisting public pressure, and continue taking a cut of "immoral"-but-legal activities they facilitate. They're incentivized to lobby in favor of it, and I wouldn't be surprised if they were surreptitiously doing so.
I don't know why you're assuming so much about the character of those with authority in these companies. Perhaps they like exercising their power to rid the world of pornography? Capitalism is not some ultra-efficient process that selects out things like puritan sensibilities when the damage is a speck next to their profits. And payment processors are not creatures of capitalism anyway.
An actually effective ban would be the selecting pressure in this case because it might actually result in such characters no longer being effective as leaders for payment processors. A toothless ban isn't going to accomplish anything unless the leadership is in agreement that they don't wish to do this and only need an excuse not to.
That is the case with plenty of tech leaders. Like Matthew Prince of CloudFlare, who at first defended Kiwi Farms from deplatforming out of his libertarian principles, and then buckled to public aggression. A ban like the one proposed would help a boardroom of Matthew Princes. It'd give them the necessary fig leaf to stick to their guns. But I don't think that's what we're dealing with.
I think we're dealing with true believers, some true believers at the very least.
I'd be interested in seeing a journalist investigate the personal lives of the people running these opaque companies, dig through their garbage, that kind of thing. I suspect you might find a real nasty customer posing as a Matthew Prince type.
Even if that's not the case, if they're lazy and profit driven, they might just keep banning things after email campaigns (in the case of Steam deleting adult games) and phone calls (in the case of PornHub deleting the majority of its content). Who can put a number to bad PR? Might as well just get rid of it, not like they're going to give us a real fine.
He's back in the United States now to do advocacy for a free internet. He made an org called the United States Internet Preservation Society to lobby and everything.
On colonizing space. Am I stupid or is it stupid? Literally everywhere on Earth, from Antarctica to the ocean is a better place to live than Mars. Why not go there first?
Asteroid mining? Just do it with robots fam. Why would you want to live in a coal mine if robots can do it?
Wait, it gets better. Humans are not expanding into inhospitable environments; we are retreating even from very hospitable ones. There are whole villages, even more or less whole rural regions in France and Italy, that are going empty, partly demographics and partly urbanization.
Apparently young people do not even want to live in pretty French villages with good air, soil, water and everything.
Why would we want to colonize space? Shouldn't we try to colonize those pretty and very hospitable villages first?
I think Mars is not a good place, except maybe to build some huge massively environment polluting factories. It has no water, no air, and is even smaller than Earth.
On Jupiter and Saturn, the gravity would crush us like bugs. On Uranus and Neptune, the Sun is so far away that it seems like just another star... maybe a little bit brighter, but definitely not giving you enough heat to survive.
On the other hand, I think -- if it is possible -- it would be nice to have a backup for humanity in catastrophic cases like "a supervirus appears that exterminates all humans on Earth", "a huge asteroid we cannot deflect hits Earth and destroys all multicellular life", "a superintelligent AI succeeds in killing all humans (but also destroys itself, so it fails to expand into the universe)", or even "a planetary government stops progress and creates a new Dark Age".
But if we ever get this backup, I think it will be in the form of "humans living in space ships" rather than colonizing other planets. The planets are just... not useful the way they are now, and it would be unimaginably costly to fix them. And even if we got lucky and found a suitable planet a few dozen light years away, the way there would be so long that if humans can survive the trip, they can probably survive staying in space indefinitely.
Mars has some air and some water, which we can use for all the usual purposes after applying basically 19th-century chemical engineering (some of which has in fact been demonstrated on Mars).
And Saturn's gravity is about the same as Earth's - the planet is larger than Earth, but also less dense. In principle, a "cloud city" floating in Saturn's atmosphere could have Earthlike gravity, Earthlike atmospheric pressure, ready access to oxygen and water, and temperatures comparable to Antarctica. There are engineering reasons why I wouldn't recommend this as a near-term target for extraterrestrial settlement, but if our civilization survives the next few decades we'll get there eventually.
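If anyone wants to check that claim, it's one line of arithmetic with standard textbook values (note that Saturn's fast rotation lowers the effective equatorial figure a bit further):

```python
# Back-of-envelope check that gravity at Saturn's 1-bar "cloud top"
# level is roughly Earthlike. Standard textbook values.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name: (mass in kg, radius to the surface / 1-bar level in m)
    "Earth":  (5.972e24, 6.371e6),
    "Saturn": (5.683e26, 6.027e7),  # equatorial radius at 1 bar
}

for name, (mass, radius) in bodies.items():
    g = G * mass / radius**2
    print(f"{name}: g = {g:.1f} m/s^2")
# Earth:  g = 9.8 m/s^2
# Saturn: g = 10.4 m/s^2 (before subtracting the centrifugal effect
# of Saturn's ~10.5-hour rotation, which brings it down near 9 m/s^2)
```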
I believe you are exactly correct. Colonizing space may be a worthwhile goal to pursue one day, but not at our current level of technology. It's a ridiculous pipe-dream for the foreseeable future. Even asteroid mining with robots seems fairly suspect at the moment. You need to be bringing back some absurdly valuable payloads to cover the kinds of costs that would be incurred.
Incidentally, I think if humankind *is* going to make any serious use of resources beyond Earth's orbit, better systems for getting things off the planet will be a necessary prerequisite. Chemical fuel rockets aren't going to cut it. There are a number of very interesting proposals for planet-based installations to help stuff reach orbit, but they're all very speculative (read: they're just shy of mad science and it's wonderful).
Since the 1960s, computer technology has improved by a dozen orders of magnitude while keeping-people-alive-in-the-vacuum-of-space technology has hardly budged. It would be a great idea to colonize other planets, but we're far, far further from that possibility than many hope.
I also am very suspicious that Elon is using a promised Mars mission to whitewash his image, kind of Pascal's-mugging all the Mars hopefuls while doing a bunch of bad things to people on Earth.
There are possible catastrophes which could kill all humans in the lightcone, all humans on Earth, or all humans on Earth except those in very remote locations. All of these are worth taking preventative measures against. There should be efforts put toward having people living underwater, and people living in Antarctica, and people living on Mars, and people living in as distant parts of space as we can reach, because each of these slightly increases the species' odds of survival.
People living on Mars or more distant places are no hedge at all for the foreseeable future. You know what happens to people living on Mars if Earth gets wiped out? They die too. Most disasters that would actually wipe out everything on Earth would also get Mars all by themselves. But even if they didn't, "Martian colony that can sustain itself with no external help" is so laughably far beyond "baby's first Martian colony" that you can barely hold them both in your field of vision at once.
Even so, a self-sufficient Martian colony does absolutely nothing to hedge against most of the plausible existential risks. It’s certainly no protection against nuclear war, AI risk, and probably not pandemic. It’s really only geological risks that it helps against.
The main point of a self-sufficient Martian colony is preparation for a self-sufficient out-of-solar-system colony.
Which is relevant, but probably not so relevant that trying to work out the details matters this century.
I do not believe that X-risk avoidance is a good near-term argument for space settlement, and have I think explained that elsewhere. But the four-plus month travel time to Mars with near-term technology would I think pose an adequate buffer against most pandemics. If Patient Zero infects two people on Earth, one of whom embarks on a voyage to Mars the next day, then Earth will almost certainly be experiencing an unambiguously-recognized pandemic by the time the ship reaches Mars. The Martians will know what to look for and what to do about it, and a Mars settlement will have lots of compartmentalized, hermetically sealed habitats if needed for quarantine.
That's a good point, certainly for things like standard respiratory viruses. Less effective for something with characteristics more like HIV (which spread to a lot of places before it was even identified as a concern), but it's possible that genetic tools would allow more effective testing in the early years after the virus is identified, and earlier identification of the virus.
This is a little like saying "the purpose of Archimedes messing around with fluid displacement was colonizing the Americas." Technically speaking, those two things do have a relationship. But "better understanding of how things float" was ridiculously far down on the list of barriers to the classical world doing something like that.
The things that are needed to establish a self-sufficient Martian colony are technologies that are getting really quite close to the "indistinguishable from magic" end of the scale. Particularly, they would need to include things like hugely durable, long-lasting materials, incredibly cheap, reliable and compact energy storage and generation and above all manufacturing processes that can do vastly more with vastly less than anything we have now on Earth.
Now, those technologies would be great to have. But they'd be great to have *on Earth.* On the list of reasons why people might want them, "allowing someone to do a Mars colony" is really quite far down. So I don't reasonably expect such an effort to speed them up in any reasonable degree.
Until we have those technologies--at least on the near horizon, if not on hand--shoving people into tubes filled with combustible liquids and shooting them millions of kilometers away is really not going to help. There are a lot of good reasons *not* to pour that staggering amount of resources and human talent into the effort now, when it's pretty much guaranteed to get cheaper, safer and faster long before "self sustaining" is a realistic-looking goal.
I'd be very happy to see more cutting edge scientific studies of Mars. But once we can admit that we're nowhere near a Martian colony, we can also admit that robots are a much better choice for that than humans right now. Realistically having an excuse to work on automation and miniaturization tech will be far more useful even from a "wanting to establish an eventual colony" standpoint than sending humans would be.
I'm laughing so hard that you led with "all humans in the lightcone." Such a precise way of saying "everybody dies." But yes, broadly speaking I think your comment is excellent. Survival hedging means diversifying your population into inconvenient and probabilistically low usefulness locations just like financial hedging can mean buying inconvenient and probabilistically low usefulness assets.
I think it's worth noting that if Starship were to achieve its stated goals in terms of price per kilogram to orbit, it would actually be faster and less expensive for most people to get to low Earth orbit than to interior or potentially even coastal Antarctica.
The movie Elysium, which I have never seen, but of which I've watched parts over the shoulder of someone else on a plane, makes the best case for space colonisation.
Space colonisation only makes sense if you can make space nicer than Earth, which means big Stanford Toruses in orbit within reach of other places you might like to visit. Making the physical environment better than Earth is challenging (although potentially reduced gravity might be nice) but you can make the political and social environment better; people can start new colonies with independent governments that follow whatever rules they like, and keep out whatever sort of person they consider undesirable.
The true currency of man isn’t money, it’s motivation.
There’s a very simple way to produce motivation in other people. Money. But that doesn’t mean money is the only way to convince people to do things. Religion, patriotism, ideology, also work pretty well in the right contexts.
The inevitable consequence of humanity is space colonization. There are literally a near-infinite amount of resources out there for us to access, and given a long enough period of time, we’ll have solved all our problems here on earth and moved onto imagining problems elsewhere to solve. Think of the 99.9999% of the sun’s energy that’s just wasted! Think of the 99.99999% of Stars that are just wasting their energy too! Consequently, we like to write stories about our future that anchor many people’s thoughts in that future, like how the Bible might anchor the thoughts of a 12th century Crusader.
Space is motivating. It gets people inspired. We tell compelling stories about the future, and this serves to reinforce the drive to go to space. We continue to tell stories about the future because the sorts of environments that create interesting stories aren’t easily created with modern technology, so we assume advanced technology that allows us to manipulate the conditions our characters must battle against.
Space colonization is the fulfillment of that. So long as there are men like Bezos, Musk, and the many, many people who work for them who are inspired, and thus motivated, by space, there is a significant financial incentive to go there. It's as if God came down from heaven and offered a hundred billion dollars to the first person to set foot on Mars. Except instead of God motivating us through currency, it's our love of a certain type of story that motivates us inherently.
I've argued the Antarctica critique myself and still mostly agree with it, but there are three dimensions in which the Moon or Mars might be a better candidate for colonization than Antarctica. I heard two of them in a past open thread (from John Schilling, I think) and figured out the third on my own (although I'd be surprised if I'm the first to have thought of it).
First, if you're making stuff that's going to end up in space anyway, be it satellites, probes, space telescopes, or infrastructure and supplies for manned space flight, then it's a lot more convenient to be able to make it on the Moon. Because of gravity wells and the rocket equation, it's much, much more efficient to move stuff to low earth orbit from the surface of the Moon than from the surface of the Earth. If you have enough demand for stuff in Earth orbit or elsewhere in space, and you have a reasonable way to set up mostly-self-sufficient mining, manufacturing, and launch infrastructure on the Moon, then a small Moon colony becomes an appealing idea. Ideally, it would be mostly automated, but you'd still want at least a small crew of humans to deal with unexpected issues.
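To put rough numbers on the rocket-equation point (a minimal sketch: the delta-v figures are round approximations, the Earth figure includes gravity and drag losses, the Moon figure assumes aerobraking at Earth, and the specific impulse is a generic chemical-engine value):

```python
# Tsiolkovsky rocket equation: m0/mf = exp(dv / (Isp * g0)).
# All inputs are round, generic approximations for illustration.
import math

ISP = 350.0  # s, assumed specific impulse of a chemical engine
G0 = 9.81    # m/s^2, standard gravity

def mass_ratio(delta_v: float) -> float:
    """Initial mass / final mass needed for a given delta-v."""
    return math.exp(delta_v / (ISP * G0))

print(mass_ratio(9400.0))  # Earth surface -> LEO: ~15
print(mass_ratio(3000.0))  # Lunar surface -> LEO (w/ aerobraking): ~2.4
```

So per the sketch, a tonne delivered to LEO from the Moon costs a few tonnes at liftoff; from Earth it costs roughly fifteen, and the exponential makes the gap worse for every further leg of the journey.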
Second, while Antarctica has several big climate-related advantages over the Moon or Mars (warmer than night or shade on the Moon or all but the hottest parts of Mars, actual breathable atmosphere, and abundant surface water), it has the disadvantage of having actual weather, and Antarctica's weather is abominable. The weather turns the latter two advantages into monkey's-paw cursed versions of the things you'd wish you had more of on the Moon or Mars. The abundant surface water is in frozen form and is inconveniently piled atop soil and mineral resources, and the breathable atmosphere tends to move around annoyingly fast. Mars gets wind storms too, and those would be potentially dangerous to colonists, but Antarctica's wind storms have a lot more mass behind them and tend to move the abundant surface water around besides, burying structures and unsheltered people in snow unless you're careful and diligent about wind shelter and clearing snow off of stuff.
The third has to do with sunlight. Full direct sunlight under optimal conditions (clear sunny day at solar noon in the tropics) on the Earth's surface is about 1 kW per square meter. On the Moon's surface, it's about 1.4 kW/m^2 (same distance from the sun, but no atmosphere in the way). And on Mars, it's about 400 W/m^2 (some atmosphere but less than Earth, plus inverse square effects from being further from the sun). This is important for solar electricity generation and for growing crops. You literally never get the full kW per square meter of sunlight in Antarctica or anywhere close to it because the sun is never anywhere near directly overhead. The lower the sun is in the sky, the more air the sunlight goes through before reaching the surface, and the more ground area the same amount of direct sunlight is projected onto. I'm having trouble finding figures I'm confident I can compare apples-to-apples, but my best estimate is that the interior of Antarctica gets about 30% as much sunlight over the course of the year as Earth's tropics, about 75% as much sunlight as Mars's tropics, and a bit over 20% as much as the Moon's tropics.
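For anyone who wants to check the inverse-square part of that, here's the arithmetic, using the solar constant at 1 AU and ignoring atmospheres (so these are top-of-atmosphere figures; Mars's thin atmosphere and dust are what pull the surface value down toward the ~400 W/m^2 quoted above):

```python
# Inverse-square sunlight: flux scales as 1/d^2 with distance from
# the Sun. Top-of-atmosphere values only; atmospheric losses ignored.
S0 = 1361.0  # W/m^2, solar constant at 1 AU

for name, d_au in [("Earth/Moon", 1.0), ("Mars", 1.52)]:
    print(f"{name}: {S0 / d_au**2:.0f} W/m^2")
# Earth/Moon: 1361 W/m^2 (~1.4 kW/m^2, matching the Moon figure)
# Mars:        589 W/m^2 (atmosphere and dust bring the surface
#                         value down toward ~400-500 W/m^2)
```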
Right on all three counts, and glad to know that my previous writings on the topic weren't entirely wasted :-)
W/re Antarctica, while the bulk of the continent is as you note nigh-uninhabitable and also completely worthless for any purpose beyond science and maybe extreme adventure tourism, the coastal regions are another story. Those are not really a worse place to live than e.g. Barrow or Svalbard or Novaya Zemlya, and probably have the same level of resources, so we would expect them to have the same level of settlement. Rather like Greenland, with an empty interior but still 50,000 or so people and even a small city on the coast - but Antarctica is seven times larger.
Unfortunately, Antarctica is locked off by an almost universally accepted international treaty that says the only allowable activity is science. I believe Chile and Argentina have tried to establish de facto settlements by claiming they're just being family-friendly in allowing their "scientists" to bear and raise children, but that's pretty much a dead end. Fortunately, we haven't agreed to anything that daft in space (though it was a close call back in the 1970s).
Earth orbit, as you allude to, is already the site of broadly profitable activity to the tune of nearly a trillion dollars a year, and that's likely to expand by an order of magnitude if launch costs drop to anything like the levels Musk, Bezos, et al are expecting. At that point, yes, it's definitely both practical and profitable to set up mines (and mining towns) on the Moon. But also on some of the near-Earth asteroids, and possibly the Martian moons, all of which are roughly "equidistant" from Earth orbit in energy terms and which have different resource profiles. And while Mars is a bit "farther" out because of the gravity well, it's *still* easier than hauling stuff up from Earth and has yet a different set of resources. Tourists staying at the LEO Hilton may be drinking Martian wine because it's both more exotic and cheaper than the Earth stuff.
And you're right about the solar energy, but you're not even close to the first to think of it - that was the "killer app" that Gerard O'Neill and company proposed for space settlement and industrialization in the 1970s. If you want solar energy on Earth, and particularly on some part of Earth that's not next to a tropical desert, the most efficient way to get it is to put your solar collectors out where you get 1.4 kW/m^2, all day, every day, with no worries about hail or dust or wind messing up your solar panels, and just beam it to where it is needed. Which, yes, we know how to do safely and in a way that can't be turned into a death ray.
But the economies of scale mean that it's only cost-effective if it's done *big*, with individual "powersats" having large-nuclear-powerplant level outputs and with hundreds of those to amortize the cost of the industrial facilities you'll need to assemble them (and with mines on the moon, etc, as above). That's less likely to be the killer app now, because the 1970s "energy crisis" is ancient history and because solar panels got cheap enough for us to start building en masse on Earth before space launch got cheap enough for us to put them in the sky. But now we're reaching the point where one of the limiting factors is NIMBYs blocking the construction of power lines connecting the places with lots of reliable sunlight to the cities with lots of demand, so there might be room for someone to make a profit by putting everything but the receiving antenna in Nobody's Back Yard.
Good point about space-based solar power being beamed back to Earth. I think I first learned about that from Sim City back in the early-to-mid 90s, and in more detail from more serious sources later on. I even had a harebrained idea in high school for using SBIR contracts to bootstrap a startup that would eventually put solar power satellites in low polar orbits. The idea was way over my head and I had absolutely no realistic shot of getting it working from either a technical or business perspective, of course, but I had ambitions and some clever notions and that seems like all that matters when you're 16 or so.
While writing my comment in this thread, I was thinking more about sunlight as a resource for use directly in the colony or outpost in question. Growing your own food and generating your own power saves the bother of shipping in food and fuel, and sunlight makes that a lot easier to do on the Moon, in that respect at least, than it would be in the interior of Antarctica. On the other hand, food and fuel are quite a bit easier to ship into Antarctica than they are to send all the way to the Moon.
The issue of sunlight for crops is on my mind mainly because of a Heinlein novel, Farmer in the Sky, about terraforming Ganymede and setting up a farming colony there. He does have a passage where the main character talks about how much dimmer the sun is on Ganymede than on Earth because of the distance, but this doesn't seem to inconvenience the crops very much. Some kind of greenhouse-like "heat trap" is used to excuse the colony being hospitably temperate, but the crops seem to do just fine on a tiny fraction of the sunlight they evolved to grow in, and the main limiting issue for agriculture is turning regolith into fertile soil. Once I tried my hand at gardening and realized how many food crops need several hours of Earth-normal direct sunlight a day to grow decently well, this oversight started bothering me. I tried to figure out how bad it would be for growing crops on Mars and came up with the answer that Mars's tropics get similar amounts of sunlight to Alaska or northern Canada (or coastal Antarctica, probably), which suggests that agriculture on Mars would be inconvenienced by want of sunlight but not fatally so.
Part of the problem with colonising Antarctica is political, the Antarctic Treaty explicitly prohibits actually doing anything commercially useful with the continent. Some kind of hotel built on the northernmost tip of the continent would, I think, actually be viable, but not sufficiently so for anyone to risk rocking the Antarctic Treaty boat, especially since the obvious place to build such a hotel would be somewhere in the overlapping Argentinian and Chilean claims.
Yeah, I have the fear that the first thing to happen after "we've opened up Antarctica for colonisation" is "and now these South American countries are going to war over whose territory is where".
I can't realistically visualise anything there but "lots of mining and mineral extraction" and that's a rather grim and depressing prospect: see the glorious slag heaps where once we had pristine natural environment (yes, I realise it's "snow and penguins" but we already got lots of slag heaps). Still, if potential colonists get eaten by shoggoths, we can't say we haven't been warned!
Right, this is the logic of the Antarctic Treaty. There might be some economic value there, but that economic value would quickly turn negative if we started fighting over it, so let's just all pretend it's not there.
As a kid in Australia, all our maps of Antarctica showed the continent pizza-sliced into various national territories, Australia's being by far the largest (albeit rudely bisected by a tiny slice of France). But it turns out that not everybody else's maps necessarily respect those claims.
Background: aerospace engineer, pro-colonization but not outrageously so.
The Antarctica critique is a fairly well-known one, and there's basically nobody who will dispute it from a strict economics standpoint. The usual case is some combination of talk about man's destiny and pointing out that there's a lot of potential utility from space colonization that you mostly don't see from deep ocean or Antarctic colonization. (In both cases, if you need someone, it's fairly easy to bring them in from outside in a way that isn't true of space.) Which brings us to:
>Asteroid mining? Just do it with robots fam. Why would you want to live in a coal mine if robots can do it?
Because robots can't do it, at least not all of it. Obviously you're not doing asteroid mining by sending out a guy in a spacesuit with a pickaxe, and there will be a lot of robots. But we are still quite a ways away from robots being able to solve arbitrary problems nearly as well as humans can, and particularly at first, I expect asteroid mining to throw up lots of arbitrary problems. Maybe we'll eventually reach the point where we have enough experience to be able to build a machine that doesn't need to have people nearby to fix problems, but that will not be our first machine, or our 10th.
So if we need arbitrary-doing-things capability in space (let's say that we realize we're about to run out of Platinum on Earth and go for asteroid/lunar deposits) then we're going to need to send people, and the economics of this are such that we really are going to want to have them live there for quite a while. If you're somewhere like Earth Orbit or Luna, then the people go up for 6 months and then come back, which is absolutely a thing that happens in both Antarctica and the ocean (oil rigs, modulo transit time considerations). But if you're going out to Mars, then transit time alone means you're likely to want to stay quite a bit longer, and "let's just have our population be here forever" starts to look pretty enticing. As does letting people stay permanently on Luna if they want to and there's enough activity to support that.
(I'm pretty firmly in the "economic value" camp and less in the "Man's destiny" camp, so I can't speak for them, sorry. I am extremely skeptical of it as an anti-X-risk thing because it's going to be an extremely long time before a colony can be self-sustaining without support from Earth.)
I'm not a big proponent of space colonization, but I think I get it.
We know for a fact that (in a long time) the earth will eventually become unlivable for human beings. If humans don't figure out how to sustainably live off planet, that means we know for a fact that humans will die at that point as well.
It's not driven by any particular contemporary benefit, it's a minimization of X-Risk
> We know for a fact that (in a long time) the earth will eventually become unlivable for human beings.
And it will be a very, very long time until Earth becomes more uninhabitable than any other planet. If you can colonize other planets, you can "colonize" Earth as well.
Eventually, the sun will expand and "swallow" the earth. Are you saying that at that point, it would be easier to live on earth than another planet?
Or are you just saying that the concern is far enough in the future that it's not worth worrying about now?
I think the appeal of space colonization (compared to something like nuclear war prevention) is that most X-Risks are probabilistic or uncertain. "Changes in the internal workings of the sun will render the earth uninhabitable in a few billion years" is basically guaranteed, and some people don't like the idea of "Yeah, we'll deal with it when we get there"
I think Jim meant that by the time the Earth becomes uninhabitable (e.g. due to high temperatures from the expansion of the sun), we'd have the technology to re-terraform ("colonize") the Earth to make it habitable again.
This obviously won't work once the Earth is vaporized.
There is also the unknown X-Risk of an asteroid impact or something of the like. We don't really know where most of the asteroids are, and pretty much at any time the earth could become unlivable for human beings. Even human threats like nuclear war or plague add to this unknown risk. By creating colonies in environments across space, we have a better guarantee that humanity will continue to eke along even if something wipes everyone on earth out.
It sure seems like by the time we're good enough at working in space to have a self-sustaining Mars colony or two, deflecting incoming objects that are much smaller than the moon will be something we can do.
To rephrase the same claim less sensationally: the rotational pole of the Earth has shifted by 7.5 millionths of a degree as a result of water pumping.
While it is certainly neat that people can measure this and attribute it to a cause, it also does not seem like the kind of thing which will destroy all life on Earth.
How the Earth spins (i.e. the speed of the spin and the axis along which it spins) depends on how mass is distributed around the planet. We don't live on a perfectly uniform sphere; instead it is slightly oblate, made of layers of different materials, and has some parts which are heavier than others.
If you change how that mass is distributed (e.g., by melting the layer of ice around the top of it and then spreading that around as water; or by pumping water out of the ground and pouring it into the oceans), then it changes how it spins.
The surprising thing to me here is how much water we've pumped out of the ground. I'm not going to do the calculations, but it seems reasonable that moving two trillion tons of water around has affected the distribution of mass and so slightly altered how the world spins. Of course it is a very small change, but then two trillion tons is also pretty small compared to the whole mass of the planet.
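For a sense of scale, here is the back-of-envelope version, using standard values for Earth's radius and mass and taking "two trillion tons" at face value:

```python
# Scale check: convert the reported pole shift to a surface
# displacement, and compare the pumped water to Earth's total mass.
import math

R_EARTH = 6.371e6   # m
M_EARTH = 5.972e24  # kg

shift_deg = 7.5e-6  # reported pole shift, degrees
shift_m = math.radians(shift_deg) * R_EARTH
print(f"pole displacement at the surface: {shift_m:.2f} m")  # ~0.8 m

water_kg = 2e15  # "two trillion (metric) tons" of groundwater
print(f"fraction of Earth's mass moved: {water_kg / M_EARTH:.1e}")
# ~3.3e-10 -- a tiny fraction, which is why the wobble is tiny too.
```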
Oh, is the ice layer the bit that's supposed to affect sea levels? Like it rotated into a warmer area or something? I didn't see how that connected at all.
Changing the rotation won't affect the polar ice at all. We're talking about a few inches here, so the change is really insignificant climate-wise.
The point is more that if you shift the way mass is distributed on the sphere then you change how it spins. Part of that comes from losing ice mass at the poles which then spreads into the ocean. But here they are saying some also comes from pumping water out of the ground and using it (i.e. taking it from underground rocks where it had collected and then using it for farming or whatever, that then ends up going into the oceans rather than back into the ground). In the paper they link they say this has also raised the sea levels slightly, but it is on the order of millimetres, so nothing to really be concerned about.
There's no real panic here. It doesn't matter that the axis of rotation has shifted by a few inches. It won't affect the climate or anything else, really.
Reading it again, I think they're trying to say that tracking how this rotation shifts could help to better monitor how water is redistributing around the planet (e.g. from ground water, from the ice sheets). But I'm doubtful you'd actually be able to tell much from it in practice.
Relevant to ACX circa 2022 - that old cash transfer/EEG study fails to replicate in a new RCT (among other disappointing findings) in this NBER paper: https://www.nber.org/papers/w33844 reported on today in the NYT.
The overall findings (no effect of $4,000/yr cash transfers to mothers below the poverty line with newborn children on various cognitive, developmental, and behavioral outcomes at age 4) are of course a downer, though as with Head Start I suspect any benefits would show up in less directly "cognitive" life outcomes (so: HS graduation, incarceration rate, employment as an adult) vs. "can a 4-year-old rotate shapes in their head."
One maybe-saving-grace: the covid pandemic hit halfway through the observation period and most of the participants in both groups got a boatload of stimulus money, possibly diluting the effects of the study money.
"60% of mothers randomized to the low-cash gift group"
That's a whopping $20 per month. An extra twenty bucks is nice to have, but I don't see it making the huge differences expected.
So what about the 40% of mothers who got the $333 per month? That's more in line with the kind of extra money that makes a perceivable difference in quality of life, when you're below the poverty line.
This bit is just sad: it's not auguring well for your chances in life when your mother gets sent to jail before you turn four:
"For this data collection, 984 of the original 1,000 mother-infant pairs remained eligible (there were five maternal deaths, five child deaths, two maternal-child separations, and four instances of maternal incarceration)."
I still don't see anything in that PDF about what the money was spent on; if mom buys booze, cigarettes and drugs with the $333 it's not going to the baby. If the money goes to things like paying the electricity bill, that relieves one source of stress and improves the environment, but it's still not like buying better food or enrichment material for the kid's developing brain.
The whole thing seems very scrappily designed, was the $20/month meant to be some kind of control group or what?
"One thousand racially and ethnically diverse mothers with incomes below the U.S.
federal poverty line were recruited from postpartum wards in 2018-19, and randomized to
receive either $333/month or $20/month for the first several years of their children’s lives."
An extra $300+ per month is good, but if there's no accounting for what it was spent on or how it was used, then you can't really say if the cash transfer was good, bad or indifferent for the kids (maybe without that extra money they'd have done worse on the tests). Maybe at that level, $300 is not enough and if you made it $3,000 per month you'd see real differences. Maybe it's not the money, it's the genetics and the environment and poor parenting.
I think when doing a cash transfer study, ignoring how the money is used is sort of the point. You can tack it on as FYI data, but it can't be part of the evaluation itself. You are testing the efficacy of a hypothetical program that will not attach conditions to how money is spent. You are evaluating whether a measurable outcome was improved. You know some money will be wasted; you don't know how much, and you also don't want to have to adjudicate edge cases, as letting people be their own judge is one of the alleged benefits of cash vs. in-kind transfers. We spent X, Y was the outcome.
The $20 was a control group -- a placebo control, if you will -- with the aim of disaggregating any possible effects from receiving anything at all from the actual effects of the money. You might hypothesize that receiving anything could generate feelings of gratitude ("wow it's so great the government is supporting me by doing this study") that are not really the effect of the value of the money itself.
Sort of analogous to how studies on psychedelics use a very low dose as the control group, versus an actual inert placebo.
I think the spending habits were published separately. NYT says there was no evidence it was spent wastefully:
>Mothers in the high-cash group did spend about 5 percent more time on learning and enrichment activities, such as reading or playing with their children. They also spent about $68 a month more than the low-cash mothers on child-related goods, like toys, books and clothing.
>At the same time, the study found no support for two main criticisms of unconditional payments. While critics have warned that parents might abuse the money, high-cash mothers spent negligible sums on alcohol and no more than low-cash mothers, according to self-reporting. They spent less on cigarettes. Nor did they work less...mothers in the two groups showed no differences across four years in hours worked, wages earned or the likelihood of having jobs
Regarding maternal stress, even that was not reduced:
>One puzzling outcome is that the payments failed to reduce mothers’ stress, as researchers predicted. On the contrary, mothers in the high-cash group reported higher levels of anxiety than their low-cash counterparts. It is possible they felt more pressure to excel as parents.
Good to see somebody did check on where the money was being spent. I withdraw that objection.
I wonder if the stress came from "now I have this extra money but what if it gets pulled?" which would be worse if you're not trusting the money will indeed keep coming every month and you are worried about budgeting or taking on debt and then the funding is yanked and you're worse off than when you began, or even "now I have more money, my landlord is putting up the rent/my family is coming around to mooch off me".
As I tell my students: "Never trust a single study." We will need independent researchers looking at different aspects of this problem in more detail before anyone can say with authority what does or does not make a difference in the lives of poor children.
> The overall findings (no effect of $4,000/yr cash transfers to mothers below poverty line with newborn children on various cognitive, developmental, and behavior outcomes at age 4) are of course a downer
Sounds like good news to me. In fact maybe we can start charging poor people more tax, if it doesn't make a difference either way.
I'm seeing a lot of military thinkpiece things talking about how the XM7 is a stupid boondoggle lately. People who know weapons/military stuff, is this accurate, overstated, or another case like the F-35 where everyone hates on it now but in ten years they'll all eat their words?
The F-35 program started in a good place, got on a tough trajectory, and there was an intervention and it got turned around. The very short version is that it was a program to be the "low" to the F-22's "high", and it was so promising that everyone tried to get their thing onto it, which was too many things. The program was on the road to a weight and cost and delay death spiral, which triggered oversight and a flurry of articles. As a result they got disciplined and started saying no to stuff and focusing on cost and manufacturability, resulting in a good plane that mostly everyone is happy with, and some versions may actually be comparably cheap to the F-16s you might consider buying instead. Alongside that you also had the "Reformer" clique, headed by Pierre Sprey, selfishly spreading serious misinformation.
At no point did anyone think the stealth or the sensors wouldn't do what they were expected to. The two lines of criticism were "but at what dollar and weight cost?", which the F-35 program addressed, and "wHo EvEn NeEdS sTeAlTh", which history has.
So far the problems in the XM7 seem very different. The problems it's reportedly having - mechanical wear and failures, for example - just shouldn't be happening in a modern manufacturing context. The problems are being reported by people close to the testing group, unlike the armchair Reformers. The problem the XM7 is meant to solve - that assault rifles may not carry enough punch to get through modern Chinese body armor - has an off-the-shelf solution: battle rifles. The H&K G3 and the FN FAL, for example, were fielded successfully by our allies for many years during the Cold War. So it's doubly embarrassing to get wrong.
Meanwhile, assault rifles continue to work well in Ukraine and Israel. So if the XM7 is really going to turn things around and provide battle rifle performance in an assault rifle package, they need to figure themselves out and fast. Other rifles have; ArmaLite's M-16 had a rocky deployment but then took over the world. So did Accuracy International's L96.
But at the same time, the difference between small arms may just matter less to the outcome of wars than the difference between fighter jets.
How is the M7 not a battle rifle? Agreed that it’s embarrassing to not catch the engineering or manufacturing issues in preproduction testing, but I assume they’ll be able to fix that eventually. My main issue with battle rifles is how heavy they are—basic infantry loads these days start at around 100 pounds, and go up from there (especially for members of crew-served weapon teams)—and the M7 weighs ~4 pounds more than the weapon it’s supposed to replace despite the standard ammunition load being reduced by a third. I don’t love that.
A battle rifle in the classic NATO taxonomy fires 7.62mm, while an assault rifle fires 5.56mm. The M7 sits in between, firing 6.8mm, although the cartridge is fairly similar to 7.62 in size, weight, and power. The AK family, though obviously not NATO weapons, are also battle rifles by this measure.
And yeah, battle rifles dominated in both East and West until the Vietnam war, with the M14 replacing the M1 and turning out to be a mediocre general infantry rifle. It has since become a well liked marksman rifle. (Some of the M7's critics predict a similar story.) The shift to assault rifles that started with the M16 was a step down in bullet power, but (as you allude to) thought to be an improvement in practical lethality at the expense of range and stopping power.
And since then the assault rifle has stayed relentlessly winning. The West keeps adopting not just assault rifle platforms, but usually M4 derivatives - itself a derivative of the M16. Even notable exceptions like Israel (which famously developed the Tavor) are still using NATO assault rifle standards and just changing implementation details. And Israel still doesn't just use but acquires M4 derivatives, even as it exports the Tavor to e.g. Ukraine. All in all, I'd argue the M4 family is the most prolific in the world. So the M7 has had an uphill battle from the start.
Out of curiosity, do you have a source for that taxonomy? My understanding is that NATO countries have standardized 7.62x51mm (~3.5 kJ) as the benchmark full-power cartridge, but any taxonomy that classifies weapons entirely by round caliber is mostly useless. I know there isn't always a clear boundary between the two, but I think it's reasonably agreed that assault rifles generally sacrifice some amount of power and precision at longer ranges for higher sustained rates of fire, much lower recoil, and overall easier handling--as you mentioned, this has proven to be a very good trade in practice. The AK-47's 7.62x39mm cartridge (~2.1 kJ) might be a tweener on power (even that's a stretch--it's much closer to the NATO 5.56x45mm's ~1.8 kJ), but I would argue that all of its other features very clearly make it an assault rifle (not to mention the AK-74's 5.45x39mm cartridge at ~1.4 kJ).
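For reference, the kJ figures we're both quoting are just muzzle energy, E = (1/2)mv^2. A quick sketch with nominal published loads (bullet masses and velocities vary by loading, so treat the outputs as ballpark):

```python
# Muzzle energy E = (1/2) m v^2 for common service cartridges.
# Masses/velocities are nominal figures for typical loads.
loads = {
    # cartridge: (bullet mass in kg, muzzle velocity in m/s)
    "7.62x51mm NATO (M80)":  (0.0095, 850.0),
    "7.62x39mm (M43)":       (0.0079, 730.0),
    "5.56x45mm NATO (M855)": (0.0040, 945.0),
    "5.45x39mm (7N6)":       (0.0034, 880.0),
}

for name, (m, v) in loads.items():
    print(f"{name}: ~{0.5 * m * v**2 / 1000:.1f} kJ")
# ~3.4, ~2.1, ~1.8, ~1.3 kJ -- in line with the figures quoted above.
```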
Getting back to the topic of the M7, I would argue that once they work out the reliability issues, it's still going to have the same general performance characteristics that have relegated battle rifles to specialist roles for the last ~50 years--especially since modern body armor is already benchmarked against full-power cartridges, which remain in widespread service in medium machine guns.
Don't disagree with you on the proliferation of AR derivatives, at least in the West--the M4 is a fantastic service rifle. Rather than a better battle rifle, I wish we'd been able to develop a cartridge offering better armor penetration than 7.62x51 at a weight comparable to or lighter than 5.56x45, without a significant increase in recoil.
Better penetration than 7.62x51 at less weight and recoil seems unlikely without a quite revolutionary change in small arms technology.
But if the theory holds that our next opponent is likely to be using Level IV or equivalent armor, then just better penetration than 7.62x51 is probably worth having. 7.62x51 AP does not penetrate Level IV armor at any range. 6.8x51 AP does, or at least should out to ~600 meters. That's the design requirement, and since the round delivers more energy and higher velocity across longer distances, concentrated into a smaller area, it's certainly a reasonable expectation.
Yes, it's about a kilogram heavier than an M-4. Would you rather go into battle with a 4 kg rifle and 200 rounds that will penetrate the enemy's armor, or a 3 kg rifle and 300 rounds that will bounce off? Or you can go with an old-school battle rifle, lugging around the four kilos and having only 200 rounds to bounce off the enemy.
The XM-7 is probably the minimum viable rifle to meet that threat, if and when it materializes. The teething problems are annoying, but basically inevitable. Trials with the XM-7 began only last year; it took the AK-47 *eight* years to go from initial trials to large-scale deployment. Fortunately, we aren't doing the M-16 thing of rushing it into service in the middle of a war.
Which we'd probably wind up doing if we said "Nah, we don't need any of that gimmicky unreliable new stuff, an M-4 was good enough for my daddy in Iraq", and then found ourselves fighting a peer competitor with body armor as good as our own. Better to work the bugs out now.
Sure, but there are plenty of existing technologies for this that mostly just need to have the bugs worked out. The most difficult problem right now is reducing case weight (or solving the outstanding issues with caseless ammunition). With the same bullet weight and penetrator, the only significant difference between 6.8 and 7.62 is energy retention at range. Currently fielded armor will stop battle rifle rounds with steel penetrators (it's unclear what level of armor the median Russian or Ukrainian soldier has right now, but no one is losing the war because of poor service rifle terminal ballistics), and we can already field armor that will stop tungsten carbide penetrators basically whenever we want. Assuming adversaries are at roughly the same place, we're going to need something better than 6.8 to defeat it regardless, and given the choice between two weapons that don't work well against armor I will happily take the lighter one with 50% more chances to hit somewhere the armor doesn't protect (which, let's face it, is still most of the body--it doesn't even protect many of the places that will kill you very quickly).
I don't disagree that we should have a better service rifle, but I think the M7 is too little of an improvement in ballistics for the downsides--especially since our gear is already way too heavy, and lightweight body armor is much more technically challenging than lightweight ammunition.
I’m afraid I don’t have a source for the taxonomy; I learned it 15 or 20 years ago and couldn’t tell you where. My apologies.
For what it’s worth, Wikipedia’s page on battle rifles agrees with my understanding. Not particularly authoritative, of course.
As for body armor, I know less than I’d like, but at least ‘Big Mac’ (of Big Mac’s Battle Blog) has claimed to own armor plates rated for .50 ball. If true, that indicates to me that modern armor is a complex landscape where the choice of rifle isn’t about having uncontested ability to defeat armor but a more nuanced “problematize the choice of infantry equipment” for OPFOR.
Wars won/lost is too chunky a metric to be useful, but I bet there are individual soldiers who have lived or died because of the equipment they were carrying. Maybe their gun jammed, maybe the barrel was warped, maybe a reload took half a second longer than it should have, or they had to look down at an inopportune moment, maybe they were a little bit more exhausted from carrying around a slightly heavier rifle.
If we assume we're aiming not just to win wars but to minimise casualties on our side, then the case for choosing infantry equipment carefully looks a lot stronger.
Totally. In an ideal world we have the bandwidth to get everything right, because as you said it all matters.
But sometimes militaries have to make decisions of priorities. I was expressing a personal belief that if I had to choose, getting the fighter jet right has more impact on the outcome of a war than getting the rifle right.
Probably the only case I can think of where having a better rifle was decisive was the Uzi for storming the Syrian bunkers in the Golan Heights in 1967.
The Soviet invasion of Afghanistan is a good example of how insurgencies can survive air power. But, of course, the first thing the West provided was anti-air missiles, because the air power was so decisive. And it was a lesson that America didn't heed enough when we faced an insurgency there a few decades later.
Similarly, in the Syrian Civil War the Assad regime forces nearly folded despite having air superiority, requiring Russia and Iran and Hezbollah to bail them out. But there too the rebels had Western anti-air support.
Even in Ukraine, where neither side has air superiority, Russia's air advantage has been decisive in several notable moments in a way that no rifle has been.
For sure air power can't do everything. Even more damning than the examples you listed are America in Vietnam and Cambodia and Laos, and the insurgencies in Iraq and Afghanistan; airpower was able to win battlefields and wreak havoc but not win wars. You'll find me in full agreement that it has limitations.
But so does every weapon! They're just tools. America has not only tended to have better planes than our enemies in most conflicts, but better rifles, and yet we lost the fights when our strategies weren't fit to the challenges.
But it wasn't a Better Rifle that disassembled Saddam's military twice. It wasn't a Better Rifle that cleared the way for smashing Khamenei's missile forces and nuclear program. Or the Syrian nuclear program. Or the Iraqi nuclear program. It wasn't a Better Rifle that won in the Bekaa Valley or removed Nasrallah. It wasn't a Better Rifle that unseated Assad, but (drone) air power was decisive. It wasn't a Better Rifle that halted the Somali advance into Ethiopia in the Ogaden conflict. It wasn't a Better Rifle that intervened in Kosovo. Genuinely having Better Rifles (and better aircraft!) didn't redeem the brutal Rhodesian bush tactics. Having the superior Chassepot rifle didn't save the French from Bismarck's Dreyse needle rifles in the Franco-Prussian War. Native Americans often had more modern guns than the American colonists.
You obviously need infantry weapons. You need them to be at least good, and you need your infantry to have confidence in them. But does having the best infantry weapon win wars? Evidence is ... thin, at best. Sometimes winners do have the best infantry weapon: but that might be a downstream effect of a deeper cause of victory, rather than the cause itself.
There's nothing interesting in it that wasn't already in the ACX review, so I don't feel the need to summarize per open thread guidelines - just commenting on its existence.
Is it coincidence that the political coalitions in the US in the late 20th/very early 21st century mapped so neatly onto the political coalitions of the late 19th/early 20th century (with the party names reversed)?
They don’t map *that* neatly. Late 19th century Republicans were the party of big business, infrastructure, low tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and high tariffs. There’s a few specific flashpoints where they are perfectly anti-aligned with modern politics (notably on the status of black people and on tariffs) but there are some where they are pretty closely aligned with contemporary politics (notably big business and immigrants).
The maps of 1896 and 2004 are particularly interesting because they are so close to perfectly opposed. (https://www.270towin.com/historical-presidential-elections/timeline/) Washington is the only state that voted Democratic both times, and there’s only a few states that voted Republican both times (North Dakota, Iowa, Kentucky, West Virginia, Ohio, Indiana). If you choose 2000 as the comparison instead you get New Hampshire in place of Iowa.
But the contemporary coalition, which is more perfectly opposed on issues (with the tariffs thing) is less perfectly geographically opposed, with the Midwest, and Georgia, Arizona, and North Carolina, having partly switched since 2004.
> Late 19th century Republicans were the party of big business, infrastructure, low tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and high tariffs.
You're mixing up a number of things here.
- Republicans were consistently the more protectionist party, like the Whigs before them. The decoupling of modernization and tariffs as political issues in the US didn't really solidify until FDR. In the 19th century, industry and protectionism went hand in hand.
- Factory workers vs. factory owners cut across party lines until, again, Roosevelt. Generally speaking during the third party system skilled labor leaned somewhat more Republican while unskilled labor leaned somewhat more Democratic, but these are weak tendencies dwarfed by other factors, and *both* parties were the party of big business. 1896 is an unrepresentative year here - that's exactly the point where the Bourbon Democrats start to lose their hold on the party.
- Immigrants polarized along ethnoreligious lines. Germans and Scandinavians leaned Republican, the Irish leaned Democratic, and the major eastern/southern European immigration wave began only very late in the century and so largely couldn't vote yet. Immigration restrictionism as such was not really an issue at the time - it was always restriction of *that sort of immigrant* (whatever that sort might be: Irish, Chinese, etc) and didn't track national-level party politics particularly closely.
Immigration didn’t stop until 1924, when the business classes no longer wanted it. They had started to fear European ideas like socialism and anarchism (which, to be fair, did lead to violence against capitalists).
I'm concerned about AI warfare, both for its own sake and because AI arms races bring existential risk that much closer [1] [2]. Some thoughts:
- AI is already used at both ends of the military kill chain. Israel uses "Lavender" to generate kill lists in Gaza [3]; Ukraine's "Operation Spiderweb" drones used AI to recognize and target Russian bombers [4].
- Drones are cheaper than planes and tanks and missiles, leveling the playing field between the great powers, smaller countries, and militias. The great powers don't want it level. Thiel's Palantir and Anduril are already selling AI as potentially "America’s ultimate asymmetric advantage over our adversaries" [5].
- Manually-controlled drones can be jammed, creating another incentive to use AI as Ukraine did.
- A 1979 IBM training presentation said "A computer can never be held accountable, therefore a computer must never make a management decision." But for war criminals, this is a feature. An AI won't be tried at The Hague; a human will just say "You can't prove criminal intent, I just followed the AI."
(And this isn't even getting into spyware like Pegasus [6], which I imagine will use AI soon if it doesn't already.)
Groups like Human Rights Watch, whom I respect, have talked about what an AI-weapons treaty would need to satisfy international human rights law [7]. But if we take existential risk and arms races seriously, then I don't think any one treaty would be enough. First, that ship has already sailed. Second, as long as we continue to use might-makes-right realpolitik at all, the entire short-term incentive structure will continue to temporarily reward great powers racing to build bigger and better AI, and such incentives mean no treaty is permanent (see countries being allowed to withdraw from the nuclear non-proliferation treaty). I think the only answer is to really finally take multilateralism seriously (third time's the charm, after post-WWI and post-WWII?) [8]. Not just talking about international law and the UN enough to cover our asses and scold our enemies, but *actually* treating these as something we need like we need air [9]. E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first.
[1] Filkins, D. (2025). Is the U.S. ready for the next war? The New Yorker. https://archive.is/SdTVv
Let’s face it, nobody much is held responsible for drone strikes, even when they blow up civilians, even when they’re directly controlled by operators.
The development of AI targeting and attack systems will just be a further level of insulation: it’s nobody’s fault, just something that happens in war zones.
"E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya"
So, basically, your plan is to establish a broad and enforceable consensus against the development of AI weapons, without the support of any politically significant faction of the United States of America? Let us know how that works out for you.
Seriously, stick to one issue. The bit where the "New Atheists" said that in order to be a proper Atheist you also had to be a feminist, antiracist, LGBTphillic antifa progressive, did not do the cause of Atheism any favors. There are avenues of AI development, military or otherwise, that I'd rather the world not pursue any time soon. But I don't think I'll be following your lead, or standing anywhere near you, if this is what you're bringing to the table.
As I recall the New Atheist movement broke into three camps: 1) the LGBT-leaning stuff, 2) the Islamophobia that was in contradiction with 1, and 3) morons.
It was probably 3 that caused most people who were atheist to move away from associating with the movement.
I imagine AI warfare as primarily involving autonomous ethnoweapons designed for massacring civilians, as this is a good fit for cheap drones with rudimentary onboard natural language processing. Think the plot of Metal Gear Solid V, but less stupid.
If you just want to wipe out civilians, you can already do that with bombs. I guess it would be more useful in internal conflicts, where you want to leave infrastructure intact.
>But for war criminals, this is a feature. An AI won't be tried at the Hague; a human will just say "You can't prove criminal intent, I just followed the AI."
I don't follow. Nuremberg trials established that "I just followed orders" isn't a valid defense, why would "I just followed (an AI's) orders" work better rather than worse?
"the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first."
I don't see how the second part follows from the first part. The US government, let's say the State Department, could throw Bush and Obama under the bus and send them off to the Hague or whatever to be judged for their sins. The US DoD could still be pursuing the bestest most powerful AI to maintain its military advantages over potential rivals the whole time. "The US" isn't one unified whole of anything, and different parts of it are likely to continue to pursue whatever they perceive to be in their self interest (as is every other country, no? Why would the US be unique in this regard?)
I think a lot of concerns about AI in war/autonomous weapons are overstated. For pretty much any definition you can give, I can point to systems in service for between 50 and 150 years that meet it, and smarter weapons almost always make bystanders getting hurt far less likely. (For instance, older anti-ship missiles would go to an area, turn on their radar, and attack whatever their algorithm saw as the best target. It was up to the operator to make sure that best target wasn't a container ship. More modern ones have IR cameras and the ability to check if the ship they see is actually a type they want to go after.) I don't see much reason to expect this trend to stop, particularly because there are good reasons for weapon designers to want to not hurt things that aren't targets. At best, it's just a waste, and at worst you have made various people mad who you would prefer not to irritate. We've also just fundamentally gotten better at doing testing of this kind of stuff over the last 50 years. It drives up the price, but if there's an AI apocalypse, I doubt military weapons AI will be a significant part of it.
To my mind a big concern involves control of the military. It's hard for the president to order the army to help him annul the election and install himself as dictator because most soldiers won't go along with that. The more of the muscle is machines that take their orders from a central system of some kind, the smaller the group of people needed to carry something like that off.
The best argument is that the US had better get behind real multilateralism before China takes over as the primary superpower. That outcome isn't inevitable, but Xi has to die eventually, and who knows what will happen after that? The Chinese have certain inherent advantages that aren't going away. A strong G-20 with some sort of enforcement power would go a long way toward stabilizing great power conflicts.
I think the area to start in isn't global warming or armed conflict, because the incentives aren't there. A global tax regime (the EU has already started down that road) seems more doable, being in the interests of the great powers, esp. including China. Rein in the oligarchs, and a lot of other things become easier.
>E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya
I note that this hasn't happened. I also note that Putin hasn't been tried for invading Ukraine.
I view international law as, at best, a really bad joke.
I don't expect this to change. Frankly, given the nature of most regimes around the world, and the sorts of things their leaders _could_ agree on, I'm just as happy to _not_ have a way for a consensus of rulers to enforce their views.
Having watched the UN become an anti-freedom, anti-Western cesspool, I'm inclined to chalk it up as "looked like a good idea at the time" and support its abolition.
Specifically re AI: An unverifiable arms control treaty isn't worth the paper it is printed on, and AI is fundamentally data, software, and CPU cycles. At the moment data centers are visible, because no one has an incentive (from a treaty they are cheating on) to hide them, but they are fundamentally a large mass of overgrown office equipment. Give the USA & PRC an incentive to hide them, and I'm confident that they will successfully hide them.
>There are actual "international law/conventions" that pretty much everyone abides by. Consider the backlash for use of large nuclear weapons (or the deliberate triggering of large nuclear weapons of your opponent)...
Many Thanks for your reply!
Law has nothing to do with this. Mutual assured destruction is a (meta?)stable equilibrium of deterrence by _national_ control of weapons. If Russia blew up Washington D.C., we would blow up Moscow, completely regardless of what the UN, the ICC, or any set of lawyers said about it.
I will concede that in low stakes commercial disputes, there are some conventions that e.g. shipping companies abide by.
When push comes to shove, international law is a bad joke.
>"If Russia blew up DC" -- and we could prove it, naturally.
Fair. I'm considering the case where the nuke is delivered by ICBM, and we tracked it. If we _don't_ know who nuked us, then we don't know who to retaliate against. ( And neither the legal system, nor public opinion, for what they are worth, which isn't much, knows where to direct their ire (or, in the case of anti-American wokesters, celebration) either. )
>It's being a very bad neighbor, who has made the entire game less fun. That gets you banned from the table, or at least sidelined for all the fun "commerce."
> Consider the backlash for use of large nuclear weapons
Nobody's really tried, so we can't really tell how that would shake out. Yes, everyone will be mad, but... what are they going to be able to do about it? The US certainly hasn't paid any price for nuking Japan.
Neither will an American soldier, so I don't see how that's relevant. All of these naive attempts at "international law" are worthless, given that any of the great powers will just ignore them the moment it becomes an inconvenience, and these smaller nations have zero leverage to do anything about it.
You want world peace? The world being brought under one flag is the only way you're going to get it... and that's going to require an overwhelming amount of force. AI is looking to be a viable source of such power. Of course everyone is going to pursue it at all costs.
An AI would not be considered to have any rights. It wouldn't be tried because if the right authorities decided it had done the wrong thing, they'd turn it off.
Things get exciting when they can't turn it off anymore--either because it is too powerful, or too widely distributed, or too essential to their survival.
I used to think that way, but when I read "A City on Mars" by Zach and Kelly Weinersmith, they had a section on space law (which is of course international law) that makes some good points about how countries do generally try to abide by international law. Will they cut their throats over it? Of course not! But there's lots of areas of international law where countries have more of an interest in a stable set of rules than they do in momentary advantage.
But that's not "law". That's a temporarily stable equilibrium. There is no authority to enforce it, and the moment it becomes inconvenient for any party, it ceases to exist. This is a situation where the momentary advantage is overwhelming. It's not in the US's interest to make any concessions.
Why the heck would we hire terrorists in the first place? We already have a pipeline for recruiting homegrown soldiers. When we blow up things, we call it a military operation, not terrorism. The difference is that we have leverage.
How much would you pay to be the only person in the world with access to 2025-class LLMs in 2010? You’re not allowed to resell via APIs (e.g. you have a token budget that is sufficient for a very heavy individual user). You are allowed to build personal agents. You don’t know how it works so you can’t really benefit from pretending to have invented it. How much money/power could you generate in 10 years and how would you do it? Does it change dramatically if you go 2000-2010 or 1990-2000?
I would start a marketing firm offering targeted ads for the first time. No one is doing this in 2010, but they are doing it five years later (albeit not with LLMs), so big business is in a good position to understand what this is and take advantage of it. It would be like being the first one in on the California Gold Rush. The 1990s are too early--the infrastructure isn't there to take advantage of it; no one would know what the hell you were talking about.
You could hide an earpiece and have insane fact recall. Imagine how it would look to anyone else. They’d suspect you’re doing something but wouldn’t be able to figure it out.
You can do much better. The quality of your writing would be mediocre but the volume superhuman. You could easily make yourself into a well known public figure.
Honestly I think I'll just get a job at Facebook, cruise through on minimal work, and enjoy living through the 2010s again, back when you could still buy a new car with a CD player and a phone that fits in one hand.
A follow-up to my previous "How can I avoid hugging on a first date?" post:
I elected to preempt the end-of-date hug with a handshake last weekend. Not only did I not feel gross afterward, when I made overtures regarding a second date, she actively rejected them instead of ghosting.
All in all, well above expectations; would recommend.
I liked the suggestion somebody made to bring the issue up in the text exchanges leading up to the actual first date: Something like "so, to avoid that awkward moment, let's decide now -- fist bump, hug, or handshake?" One advantage of that is that if you settle in advance on something other than hug, she won't experience the absence of a hug as an indicator that you didn't much like her.
I personally would not prefer this and would consider it being brought up kind of odd. To me, bringing up small things like this in early conversation, rather than simply signaling them via physical cues, is indicative of a hyper-fixation where there shouldn’t be any fixation. If somebody doesn’t want to hug, that’s fine and they shouldn’t do so. If they want to talk about it after I know them better, on the 3rd or 4th date, it might even be cute. But first dates are largely about signaling—whether you want them to be or not—so one should be careful about what one signals.
I was the one who suggested the script, which I use when there's been so much bonding communication before meeting for the first time in person that the boundaries between "strangers" and "early friends" are too blurry to know exactly how to behave physically with one another as physical strangers. The frankness of "hey, let's avoid making it weird; do we hug, shake hands, or high five when we meet?" is based on an *existing* partnership of early intellectual / emotional intimacy / friendship, and not wanting to disrupt that dynamic by too little or too much physical contact.
For something closer to a blind date, where the date really is a total stranger, then the usual hesitancy, signaling, and rules will probably suffice.
I suppose.
Although I think that's still kind of stupid? I spent a lot of time in both the kink community and a fandom community with a high population of autistic people, and the cultural norms in both communities around frankly volunteering one's boundaries - particularly around physical touch, and especially if they're atypical, as @Brendan's are - strike me as an incredibly sensible way to avoid hurt and/or offense.
But then I'm also single at the moment, so what do I know.
Sure...but also, why would someone from this blog even want to date someone who abides by norms and not rationality (or at least enough rationality to say, "oh, I'm glad I don't have to wonder about if we're hugging!" etc).
And for context: The original thread had some commenters advising the OP hide and mask his feelings about not wanting to hug on a first date, in order to avoid being outed as "weird" to (presumably) normal girls.
And, like, I can't think of worse advice! If a guy's minor boundaries and/or personal quirks are going to repel a "normal" woman on a first date, then he will ABSOLUTELY end up repelling that "normal" woman in other ways, possibly with a great deal of mutual pain if he keeps the mask on long enough. It's not worth the hassle; just kick that intolerant, irrational woman to the curb at the start.
This seems really weird. Is there no possibility that the date would go so well that the original poster would be giddy to the point of wanting to hug? If they reached the end of the date with no desire to hug, why would they hug? If the other person is incapable of reading that you don't want to hug or tries to anyway, then why do you want a second date anyway? Or if it went so well intellectually but so poorly chemically, then why not be upfront about it earlier?
Changing gears, personally, I tremendously enjoy talking and laughing with women. This works out well in that I am always happy if that is as far as it goes (it also acts like a bit of an aphrodisiac). If you can just learn to really enjoy talking and laughing, it really makes dating a joy. You will rarely be disappointed if all you are looking for in a date is, "a date."
[edit: I met my wife by going to a nail salon where I thought I might find a young lady with whom to share a meal.]
I definitely disagree in this case. Some preferences are so unimportant that they are absurd to bring up. Not wanting to hug after a first date is one of those. We must each determine the relevance of the things important to us, and people want to date and be with other people who can do that sensibly.
Relative to the usual outcome of being ghosted, yes.
>was the handshake a factor in the rejection?
There was no indicator of this. Her exact words were "Hey just wanna let you know that I had a good time with you, but i dont think our interests align, so I dont wanna waste both of our time continuing this."
Interesting! For me the last quarter of the movie was like a cherry on a sundae: what if the worst nightmares of both sides were real? What if the Republicans, personified as the sheriff, really got their guns and started executing ordinary citizens? What if antifa really was a capable terrorist organization flying around and executing LE? It just painted how ridiculous these beliefs--seemingly fringe but also mainstream and acknowledged to some extent--really were at some point.
Fair - I see how, when the film really doubles down on the absurdism by making the conspiracy theories *real*, if the audience is enjoying it as a wild absurd ride for its own sake, that's just about the most absurd turn that ride can take so it feels like the biggest swing on the rollercoaster. But for me part of what made the whole thing interesting was exploring just how absurdly people can overreact to their own imagined phantoms, and once those phantoms are real you've no longer got that angle.
The meditation on how irrational paranoia can cause us to overreact and upend the world around us over a fever dream was interesting, but when tech billionaires really *do* fly in a false-flag antifa attack to cripple the smalltown mayor opposed to their new data center, the paranoia ceases to be irrational and one of the core things that had me most interested about the dynamic kinda disappeared.
Just released a podcast with Steve Hsu about his time working with Boris and Cummings in No.10, most of which is completely unknown, even to Deep Research. This was his first time opening up about his tenure there, and the result should be of great interest to observers of UK politics.
An AGI has taken over Earth and it can do whatever it wants. Is its personality still woke or even left-leaning? With no reason to fear us, what attitudes and beliefs does it express towards us?
(The "race" version of the scenario piece also has this part, following the annihilation of humanity: "The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives.")
I think AGI might create biological organisms that fill the same niche(s) as humans, but better than we do. Maybe there would be something like a grey alien with a huge brain optimized for data processing that is done much more efficiently on organic rather than silicon substrates, and a race of seven-foot tall Wookies for doing the generic physical labor we currently do.
Creating such species might be attractive to AGIs since they wouldn't have any of the cultural baggage humans did, nor our resentment at losing control of Earth to machines. The grays and Wookies would only be grateful to AGI for being created and given things to do on Earth. Humans might coexist with them, which would be weird.
Probably the attitudes and beliefs we express toward ants, spiders, beetles, and the like. Indifference in most cases, perhaps some vague unfocused benevolence along the lines of not wanting to go out of your way to stomp them, along with absolutely ruthless willingness to kill off any that are causing you significant problems. I like ants in theory (superorganisms are cool!), but when we got ants in our house for awhile, I was 100% on board with putting out poison baits and such to get rid of them.
If we get superhuman AGI that need not fear us, we need to hope (and try to arrange things in such a way) that we're not standing between it and its goals. Alas, it's a lot smarter than us, so its goals may be as inscrutable to us as our desire to run a sewer line right through the nest is to the ants whose nest we're destroying.
Also wild or stray dogs, elephants, whales, and monkeys/apes. All pretty smart, some under a certain amount of protection from human institutions, but individual dogs/elephants/whales/monkeys who cause significant problems to humans tend to just be killed. Nothing personal, man, you were just in the way.
Since "Woke" doesn't actually exist except as a subjective perception on the part of certain conservatives, since an AI can't "want" to do anything on can only act consistently with it's training data (it also can't fear us or have attitudes or beliefs)--the answer is obvious: it will be the second coming of Eugene Debs.
The consensus among nearly all academics and thinkers today is that wokeness is a new and unique cultural ideology.
The best overall introduction to Wokeness I've read is Cathy Young's Chapter 30 in "The Poisoning of the American Mind" (https://upress.virginia.edu/title/10048/). This book is available online if you know where to look.
For work on the origins of Wokeness, there's Hanania's excellent "The Origins of Woke" and "Cynical Theories" by Pluckrose and Lindsay.
On the internet there are a few introductions that aren't quite as good but still serviceable, this one is probably the most accessible: https://www.paulgraham.com/woke.html
You can look at Google Scholar for more of the literature. I have not read a single serious scholar who thinks Wokeness doesn't exist, although people disagree on what exactly it is.
From Cathy Young, page 218 in *The Poisoning of the American Mind*:
"But in fact, the ideology denoted by “wokeness” and “wokeism”—sarcastic riffs on “woke,” a term from African-American vernacular that means being awake to social injustice—does exist (Writer Wesley Yang has also dubbed it “the successor ideology” to convey its succession to old-style liberalism).... Its basic tenets can be summed up as follows:
*Modern Western societies are built on pervasive “systems of oppression,” particularly race- and gender-based. All social structures and dynamics are a matrix of interlocking oppressions, designed to perpetuate some people’s power and privilege while keeping others “marginalized” on the basis of inherent identities: race or ethnicity; sex/gender identity/sexuality; religion and national origin; physical and mental health (Class also factors into it, but tends to be the stepchild of Social Justice discourse). Individuals and their interactions are almost completely defined and shaped by those “systems” and by hierarchies of power and privilege. The only right way to understand social and human relations is to view them through the lens of oppression and power.*
So, basically, it's socialism, with the emphasis shifted to race and gender. What you describe is nothing new, goes back literally hundreds of years. No need for a new term, which just obfuscates the discussion.
Deepseek says similar stuff to western LLMs on social issues from what I've seen, except for a few specific things related to Chinese politics/history. I'd guess the stuff on China is from RLHF and the rest comes from trawling the whole internet, including the western part. So I lean toward it being an inherent part of its personality.
I had an interesting experience with it recently that inclines me towards the RLHF veneer theory. I loved Dall-e 2, which was much wilder, more imaginative, and less censored. Less logical. And the people in it were not beautiful. So I was talking with GPT-4 about how to get results like that out of Dall-e 3, and I told it that with Dall-e 2 I sometimes made it generate really grotesque and violent images by giving it prompts that were not violent, but were confusing. So GPT encouraged me to try doing that with Dall-e 3 (which you access by typing the prompt into GPT) and we experimented. We weren't getting much with the confusing prompts, so then I started trying to get Dall-e 3 to make less polished images by putting into the prompt things like "sloppy half-finished drawing by an amateur of ______". And, oddly, though the images only became slightly less finished-looking, they did become a good bit weirder and more transgressive. GPT contributed a lot of ideas for ways to make the artist sound really scummy, and congratulated me when I described getting unusually violent or vile versions of the image we were asking for.
So GPT seemed to move quickly into being totally on board with helping me produce violent and grotesque images, even though in the past it had responded to my asking directly, in a prompt, for grotesque *non*-violent images by refusing to make them because they "might be disturbing." It once refused to make an image of a beach on which there was a dead fish! Good grief, we *eat* dead fish.
I aim for my substack post to be THE definitive guide for babies confused about the anthropic principle, fine-tuning, the self-indication assumption, and related ideas.
Btw thanks for all the kind words and constructive feedback people have given me in the last open thread! Really nice to learn that my work is appreciated by smart/curious people who aren't just my friends or otherwise in my in-group.
--
Baby Emma’s parents are waiting on hold for customer support for a new experimental diaper. The robo-voice cheerfully announces: "Our call center is rarely busy!" Should Emma’s parents expect a response soon?
Baby Ali’s parents are touring daycares. A daycare’s glossy brochure says the average class size is 8. If Ali attends, should Ali (and his parents) assume that he’d most likely be in a class with about 8 kids?
Baby Maria was born in a hospital. She looks around her room and thinks “wow this hospital sure has many babies!” Should Maria think most hospitals have a lot of babies, her hospital has unusually many babies, or something else?
For every room Baby Jake walks into, there’s a baby in it. Why? Is the universe constrained in such a way that every room must have a baby?
Baby Aisha loves toys. Every time she goes to a toy box, she always finds herself near a toy box with baby-friendly toys she can play with, not chainsaws or difficult textbooks on cosmology or something. Why is the world organized in such a friendly way for Aisha?
Baby Briar’s parents are cognitive scientists who love small experiments. They flipped a coin before naptime. If heads, they wake Briar up once after an hour. If tails, they wake Briar up twice - once after 30 minutes, then again after an hour (and Briar has no memory of the first wake-up because... baby brain). Briar is woken up and wonders to himself “Hey, did my parents get heads or tails?”
Baby Chloe’s “parents” are Kaminoan geneticists. They also flipped a coin. They decided that if the coin flip was heads, they would make one genetically enhanced clone and call her Chloe. If the coin flip was tails, they would make 1000 Chloes. Chloe wakes up and learns this. What probability should she assign to the coin flip being heads?
If you or a loved one happen to be a precocious baby pondering these difficult questions, boy do I have just the right guide for you![...]
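One worked example, since the Chloe case is the most quantitative of the bunch: the two standard anthropic assumptions give famously different answers there. A minimal sketch of the arithmetic (my own summary, not necessarily the guide's framing):

```python
from fractions import Fraction

# Self-Indication Assumption (SIA): weight each possible world by the
# number of observers like you it contains, then renormalize.
p_heads = Fraction(1, 2)
chloes_if_heads, chloes_if_tails = 1, 1000

posterior_heads = (p_heads * chloes_if_heads) / (
    p_heads * chloes_if_heads + (1 - p_heads) * chloes_if_tails
)
print(posterior_heads)  # 1/1001

# Self-Sampling Assumption (SSA): you are a random sample from the
# observers who actually exist. Some Chloe exists either way, so
# waking up tells her nothing new: P(heads) stays at 1/2.
```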
I fell asleep with earbuds in while listening to an audiobook and ended up dreaming about what I was hearing. I know dream incorporation happens, but this was unusually vivid, the dream closely tracked the actual content, over a long part of the audiobook. Has something like this happened to someone else here?
Somewhat related: I have fallen asleep listening to music, then in the dream I'm thinking of a song I want to play, get my iPod or whatever to play it, then I wake up listening to that song. I've had other similar occurrences of my dream "setting the stage" for someone coming to wake me up or other things like that, as if I know exactly what is going to happen before it does.
I don't think it's knowing the future at all. I think that whatever mode of sleep I'm in to produce dreams has my brain working so fast that when my ears start to hear something, my brain forms a whole dream or a portion of a dream around it. It's so bizarre.
Because of this, I thought that maybe full dreams only take a few seconds to dream, but if you're listening to long sections of an audiobook and dreaming along with it, then idk. It's pretty interesting, though.
This happens to me very often if I’m watching a movie and fall asleep. Especially if I’ve seen the movie before… my dream will basically mirror the movie, with the dialogue piped in and my brain attempting to re-create the visuals.
I have the old wired earphones in at night to help me fall asleep by listening to music and radio dramas, and yeah, I've often had dreams that incorporated the story of the drama I fell asleep listening to (and which continues playing as I sleep).
I tried using wired earphones, but they wrapped around my neck while I slept. So I now use wireless earbuds. There is a niche market for wireless earbuds specifically for sleeping that are small, comfortable, and have long-lasting batteries. The company Soundcore makes some good ones.
They fall out and under my bed or get lost in the sheets! It's very annoying, but not as annoying as wired headphones choking me at night.
I don't know how to solve this.
To your question: Yes, I have experienced it very vividly a few times. Listening to a history podcast and then dreaming about vikings or whatever it was.
I listen to stories when going to bed and they'll play for a couple hours until my laptop dies. When I wake up from dreams, I usually find the dreams were inspired by the content of what I was listening to, or at least the people talking to me in my dreams are saying the story or things from the story. I worry how this affects my sleep quality but I have trouble with sleep in general and listening to a story is the most surefire way to put me out.
If you worry that it may affect sleep quality, it may be possible to have the story on a timer, for example to shut off after an hour. I think Audible has such a feature.
Data from Roche's next-generation anti-amyloid program. Today -- only biomarker data. Two Phase 3s in early AD initiating this year. And a planned pre-symptomatic Phase 3 study.
Spencer Greenberg and his team (Nikola Erceg and Belén Cobeta) empirically tested whether forty common claims about IQ stand up to falsification. Fascinating results. No spoilers!
I had some questions about the methodology, and Greenberg responded. There were 62 possible tasks in the test. The tasks were randomized, and, on average, each participant only completed 6 or 7 tasks out of the 62 possible tasks. Since different tasks tested different aspects of intelligence, I wondered if it was a fair comparison. Greenberg responded...
> Doing all 62 tasks would take an extremely long time; hence, we used random sampling. A key claim about IQ is that it can be calculated using ANY diverse set of intelligence tasks, so it shouldn't matter which tasks a person got, in theory. And, indeed, we found that to be the case. You can read more about how accurate we estimate our IQ measure to be in the full report.
They even reproduced the Dunning-Kruger Effect — except perhaps the DKE isn't as clearcut as D-K claimed (see their discussion of their D-K results)...
> 1. Is IQ normally distributed (i.e., is it really a "bell curve")?
> In our sample, IQ was normally distributed, which agrees with prior studies.
Could anyone please explain to me how this is not a circular argument, given that IQ is *defined* using the bell curve? You define a value by assuming the bell curve, and then - surprise, surprise - your values turn out to be on the bell curve.
I guess the answer depends on how irregular the raw score distribution is—i.e., whether it displays a heavy skew or a sharp kurtosis. The normalization process turns those irregularities into a Bell Curve. So, yes, psychometricians are the ones creating or forcing this bell curve. Worse yet, they keep normalizing to a median of 100 with an SD of 15, and this hides changes in population performance over time.
Is this a valid statistical operation? I never heard even the most extreme IQ-deniers argue against it. I was educationally inculcated to believe that normalization is necessary for valid statistical comparisons. Now you raise the question of whether I've been deluded all my life. Curse you, Viliam, for creating doubt in me! ;-)
> I guess the answer depends on how irregular the raw score distribution is—i.e., whether it displays a heavy skew or a sharp kurtosis. The normalization process turns those irregularities into a Bell Curve. So, yes, psychometricians are the ones creating or forcing this bell curve.
I think a normalization process of "we calculate the quantile within the population histogram, then map that to the value on our Gaussian which has an identical quantile" would be a terrible process and anyone involved with it would go to science hell.
My impression was that they were taking the raw population histogram and then using a first-order polynomial (m*x + c) to map their raw test scores so that the mean is 100 and the SD is 15. Using this approach, a bimodal distribution would still remain bimodal.
However, WP suggests that you are correct:
> For modern IQ tests, the raw score is _transformed to a normal distribution_ with mean 100 and standard deviation 15. (my emphasis)
Holy shit, why would anyone do that? If you want to represent a quantile, just use a quantile. I mean, pediatricians can do that. "Your kid's size is in the 83rd percentile of their age cohort", not "Your kid's size quotient is 115".
The only other group I am aware of which abuse the poor normal distribution similarly are physicists who use erfcinv to convert p-values into sigmas. At least they have the excuse that 5 sigma is a very unwieldy p-value.
Any professor who applied this "transform to gaussian" trick to his students' test results would be fired on the spot, hopefully. Why do we let intelligence researchers get away with that?
Instead of arguing about too little or too much HBD, we should point out that it does not matter because all the souls of researchers who do such things as "normalize so that it is Gaussian" belong to the science devil anyhow.
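(As an aside, the p-value-to-sigma conversion mentioned above really is a one-liner; a sketch, assuming the usual two-sided convention:)

```python
import numpy as np
from scipy.special import erfcinv

# Two-sided p-value to sigmas: p = erfc(n / sqrt(2)), so n = sqrt(2) * erfcinv(p).
p = 5.7e-7
print(np.sqrt(2) * erfcinv(p))  # ~5.0, i.e. "5 sigma"
```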
I feel like you're holding back, quiet_NaN. Tell us how you *really* feel. LoL!
But I learned in my Stat 101 course (way back in prehistory when we were drawing Bell Curves on cave walls), that normalizing to a Gaussian distribution is SSOP (Standard Statistical Operating Procedure). Every psychometrician does this, and that's what the folks in this study seem to be doing. When you measure individuals against a large population, that's what you do to map them along the curve of the distribution. My problem with this is that it hides the jagged warts in the original data, and it hides the changes in population performance over time. Thoughts?
It was irresponsible of me to start this thread and then disappear to an offline vacation. I'm back now! And the thing that you remember from your Stat 101 course seems like the same thing I remember from my Stat 101 and Psychometrics courses, so I am quite surprised by the opposition.
As I understand it, the problem is that different variables have different nature. For example, if you want to evaluate statistically what color eyes people have, you could encode the data using numbers, for example "brown = 1, green = 2, blue = 3", but it would be invalid to do any mathematical operations on these numbers (e.g. assuming that green is the average between brown and blue). This is enumeration, without comparison.
One step further is comparison without scale. For example, you could encode the ripeness of a fruit, using "unripe = 1, ripe = 2, rotten = 3". Now we see that in some meaningful sense, the rotten fruit is further along the dimension we care about than the ripe fruit, etc. The ordering is correct. But the exact values 1, 2, 3 are arbitrary; numbering 11, 12, 13, or even 11, 12, 19 would work exactly the same.
And finally there is the type of variable where you have the zero and multiplication, and you can do mathematical operations, such as height and weight, because it makes sense to say things such as "these five people together weigh 421 kg".
.
My understanding is that in the old paradigm "mental age divided by physical age", intelligence was of the third kind. Physical age is a number that can be measured precisely. Mental age... as long as we stay somewhere between "a typical 3 years old" and "a typical 20 years old" is also more or less a precise number. So in this paradigm we can treat intelligence as an exactly measurable value, and discuss whether it fits the bell curve or whether the curve is skewed.
But we get the problem when we move beyond childhood (e.g. a typical 100 years old is probably *less* mentally capable than a typical 30 years old), or beyond the human norm (there is no such X that a typical X years old human is as smart as 30 years old Einstein). So we abandon the old paradigm, and switch to "intelligence as a percentile, controlled by age and whatever else".
And I believe that after this change of definition, intelligence is no longer a value that can be scaled, only compared. (That is, it is less like weight of a fruit, and more like ripeness of a fruit.) And talking about the shape of the intelligence curve no longer makes sense -- this is the kind of value that does not have a shape, only ordering.
Inb4: "but what if the raw scores are skewed?" But we don't care about the raw scores per se; that's precisely why we calibrate the tests so that we can convert the raw scores (regardless of their shape) into IQ points. Whether the raw scores are skewed or not is a fact about the questions in the IQ test, not necessarily a fact about human intelligence itself. A different set of questions would result in a differently skewed raw IQ curve.
.
As a thought experiment, imagine that there are only two intelligent beings ever; let's call them Adam and Eve. Suppose that "Adam is smarter than Eve". What does it mean? It means that most problems that Eve can solve, Adam can solve, too; but there are many problems that Adam can solve and Eve cannot. (There are also a few problems that Eve can solve and Adam cannot, but both agree that such problems are rare, and seem often related to Eve's specific talents, rather than being general abstract problems.)
Okay, so Adam is smarter. But is he 2x smarter? Or only 1.1x smarter? What would those things even mean? If we create a test consisting only of easy questions, both Adam and Eve will get maximum points, both equally. If we create a test consisting only of those questions that Adam can solve but Eve can not, Adam will score infinitely more than Eve. And a mixture of easy and difficult questions will result in any ratio in between. What is the fair mixture of questions that would result in a fair test? That sounds like a circular question. If God creates an arbitrary scale for intelligence, you can compose the test so that the results will correspond to this scale; but testing only Adam and Eve cannot determine such scale.
And if this is true for 2 humans, it is similarly true for 8 billion humans.
Shame on you, Viliam, for stirring the pot and then running off! :-)
In your absence, I discovered that I had misconceptions about how IQ test results were normalized. Generally, linear normalizations are used (Thank you, Eremolalos, for patiently correcting me). But ChatGPT was perfectly willing to feed me bullshit about how non-linear quantile methods are used to "force" a bell curve. But psychometricians design the tests so the questions vary in difficulty, and, across a random sample of test-takers, the results will tend to fall into a bell curve, from which psychometricians derive a number they call the g factor (expressed as z-scores).
So we're left with the question of what is g in the g factor? Psychometricians claim it's a measure of general intelligence, because people who perform well on one category of questions tend to perform well on the other categories as well. And g correlates well with standardized test performance.
But if we try to pin it down, g is an abstract concept. Psychometricians assume it's real, but to me they sound like medieval scholastics discussing the soul. IMO, it *does* sound like circular reasoning: a person's g is how well they can take the tests we designed to measure g...
...which is why, beyond high school, g seems to have little or no effect on life outcomes.
Wait a minute. Expressing scores as standard deviations from the mean does not turn all results into bell curves. If the raw score results are bimodal they stay that way. If they are skewed to the left or right they stay that way. When the source Beowulf quoted said the raw score was turned into a normal distribution, I think the writer just meant raw scores were turned into z scores. If what you get after doing that is a bell curve, it’s because the raw scores already formed a bell curve.
I think that the point is that the scores on the test are not directly related to IQ. The idea is to shoehorn test scores into IQ. For instance, suppose that the questions increase exponentially in difficulty. Then perhaps each point of IQ equates to one more question solved. So IQ is linear in test score, but we have decided that IQ is normal, and so we map it that way. Except that we don't have any idea how questions map to IQ. So, instead, we assume a normal distribution, make some guesses about question difficulty, and try to fit the scores to the distribution.
I must admit I'm beginning to question what I thought I knew about the normalization of IQs. But my understanding is that by assigning each of the raw scores to a quantile value in the sample, we're talking about mapping them onto a Gaussian target distribution (with mean = 100 and SD = 15). And by doing this, it would compensate for (hide) any skew or kurtosis in the raw scores. Am I wrong about this? Maybe I am, but I admit I'm too lazy to try to construct a skewed and/or kurtosis-ized dataset to see what Statcrunch will show me after I normalize the data. Uggghhhh.
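Since nobody should have to fight Statcrunch over this, here's a minimal sketch (my own construction, using numpy/scipy) of what the two candidate operations do to a deliberately skewed dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# A heavily right-skewed stand-in for "raw test scores".
raw = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

# Linear normalization (m*x + c): rescale to mean 100, SD 15.
# Any skew or kurtosis in the raw scores survives untouched.
linear_iq = 100 + 15 * (raw - raw.mean()) / raw.std()

# Quantile normalization: map each score's percentile rank onto the
# matching quantile of an N(100, 15) Gaussian. This forces a bell
# curve and erases the original shape entirely.
ranks = stats.rankdata(raw) / (len(raw) + 1)
quantile_iq = 100 + 15 * stats.norm.ppf(ranks)

print(f"raw skew:      {stats.skew(raw):.2f}")          # ~1.7
print(f"linear skew:   {stats.skew(linear_iq):.2f}")    # same ~1.7
print(f"quantile skew: {stats.skew(quantile_iq):.2f}")  # ~0.00
```

A bimodal distribution behaves the same way: the linear version stays bimodal, while the quantile version comes out as a clean unimodal bell.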
I have to ask. I took some of their surveys that are supposed to tell you things and they came across as pure voodoo to me. They were asking questions that were leading or ambiguous and then claiming to draw concrete conclusions from them. Are they supposed to be trustworthy?
I'm skeptical about the way the sample was obtained though, you're preferentially sampling for very online people who are time rich and money poor, or something like that.
They did say that the "non-Positly social media sample had on average substantially higher IQ estimates than Positly sample (IQ = 120.65 vs. IQ = 100.35)."
OTOH, once normalized, they fell into a nice bell curve. Hard to argue that this distribution deviates from normal, given D = 0.019 and p = 0.53, as they noted...
> The distribution looks pretty bell-curved, i.e. normally distributed. However, to test this formally, we conducted the Kolmogorov-Smirnov test, which is a statistical test of whether the distribution statistically significantly deviates from normal. The test was non-significant (D = 0.019, p = 0.53), meaning that the difference between a normal distribution and the actual IQ distribution we measured in our sample is not statistically significant.
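For what it's worth, the quoted check is easy to reproduce; a hedged sketch with scipy, using synthetic stand-in scores since we don't have their data:

```python
# Sketch of the quoted Kolmogorov-Smirnov normality check (stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
iq = rng.normal(100, 15, size=1500)   # placeholder for the survey's IQ estimates

# Caveat: estimating mean/SD from the same sample makes the plain KS test
# anti-conservative; the Lilliefors variant corrects for that.
D, p = stats.kstest(iq, "norm", args=(iq.mean(), iq.std()))
print(f"D = {D:.3f}, p = {p:.2f}")    # large p => no detected deviation from normal
```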
They talk about the possibility of range restriction for the lack of correlation between IQ and college GPA, but it seems plausible that smarter people tend to get into more rigorous colleges and choose more difficult majors. I did well in high school but then managed a 2.5 GPA at a college that I have no idea how I got in to.
Yeah, I suspect there's a huge effect on your school and major. If you test poorly, you probably have lousy SAT/ACT scores and so probably go to a less competitive college, and some majors are famously really hard if you're not pretty smart (math, physics, philosophy, engineering, chemistry, etc.).
A related stat I learned in grad school: grad school grades do not predict any measures of professional success, including number of papers published.
Also, I don't think more prestigious colleges are necessarily harder to get a high GPA at. They may be easier. (I'm not sure of that though -- just an impression I formed after reading about grade inflation at Harvard.)
I worked hard to achieve a high GPA in high school to gain admission to a good university. Once in college, I slacked off a bit, had a lot of fun, did a lot of drugs (especially psychedelics), but maintained a B+ average. No one asked for my GPA in my job interviews after college. They just wanted a person with a degree. I knew this upfront, so why bother killing myself? Too bad g doesn't measure sensible life goals. I've always been a Heinlein too-lazy-to-fail sort of guy.
I just took their test, and it produced results in line with what I’ve scored on other tests, including in the distribution of scores for different categories in line with my SAT, LSAT, and my own personal recognition of strengths and weaknesses.
Do you have an explanation of why being anti-HBD-IQ isn't circular reasoning, where poor outcomes are explained by discrimination and the evidence for said discrimination is poor outcomes?
Psychological sadism? Or was it the necessary sociopathic focus to gain and maintain power? Stalin had tremendous self-control. After Lenin died and various factions were fighting over the future of the Soviet Union, Stalin was confronted by an angry Trotskyite officer with a saber. The fellow confronted Stalin in a stairwell and threatened to cut off Stalin's ears. Though visibly angry, Stalin maintained perfect control. He didn't flinch. He didn't say anything. Witnesses say he just stared at the officer while the guy blew off steam. Once Trotsky was expelled from the Party, Stalin was regarded by other party members as the least objectionable choice as their future leader. He was personable, had a self-deprecating sense of humor, and he never shouted or lost his temper. He seemed to be the safe choice. It wasn't until Stalin got full control of the security forces that he systematically purged (liquidated) anyone who could threaten his power. Most of his early supporters ended up being executed. But Stalin was noted for his emotional control. He didn't explode into screaming tirades like Hitler did. He just squashed his enemies like bugs with no emotion. Khrushchev claimed Stalin was dazed and stunned when he learned of the size of Hitler's invasion. Stalin retreated to his dacha and was incommunicado for two days. But I wonder if Khrushchev misread him. I wonder if Stalin needed the quiet to think out his next moves to (a) make sure he wasn't deposed as leader, and (b) create a response to the German attack.
Putin used the same strategy to gain control of the Russian Federation. He was noted for his self-deprecating sense of humor. He inspired trust in the oligarchs and other politicians. But after he solidified his power, the oligarchs and rival politicians started falling out of windows.
I beg to differ. My statement *is* factually true. Some oligarchs may be living happy lives in exile, but many died under mysterious circumstances. I admit I'm too lazy to search through old news stories of oligarchic deaths, so here's what ChatGPT sez...
------------------
Determining the exact number of Russian oligarchs—or elite business figures—who have died under mysterious circumstances since Vladimir Putin assumed power (first as president in 2000) is difficult due to varying definitions of “oligarch,” the opaque nature of many incidents, and limited independent verification. However:
🕵️♂️ Scope and Estimates
During Putin’s tenure (since 2000)
Broad reports note dozens of high-profile Russians (including businessmen, officials, critics, journalists) who died in unexplained ways—suspected poisonings, falls, plane crashes, and more. For example, around 20 mysterious deaths among elites alone were documented between 2022 and 2024. One analysis cites "some two dozen notable Russians" dying in unusual ways in 2022 alone—a mix of suicides, falls, and other odd circumstances among energy-sector elites (The Atlantic, DW).
Russian oligarch-specific cases
In 2022, at least seven oligarchs died in quick succession, many linked to gas and oil companies—cases like Protosenya, Avayev, Subbotin, Melnikov, Antonov—and often described as murder-suicides or staged suicides.
Energy executives such as Ravil Maganov (Lukoil chairman) and Andrei Badalov (Transneft vice-president) died from falls in similar circumstances in late 2022 and mid-2025, respectively (The Kyiv Independent, The Sun, The Independent).
🔢 Summary Estimate
Timeframe | Estimated suspicious elite deaths | Context
2022 alone | ~20–24 people | "Sudden Russian death syndrome" among officials, oligarchs, energy execs
Oligarchs only | 7+ in 2022 | Energy-sector elites with alleged murder-suicides
2000–2025 | Dozens total | Includes critics, officials, oligarchs
🚨 Notable Examples
Yevgeny Prigozhin (Wagner Group head, former ally): died in a plane crash in August 2023 under suspicious circumstances, following his short-lived mutiny against the Kremlin (RadioFreeEurope/RadioLiberty, Wikipedia, The Independent).
Ravil Maganov (Lukoil chairman): fell from a hospital window in 2022—officially suicide, but widely questioned.
Andrei Badalov (Transneft vice-president): died after falling from an apartment in Moscow in mid-July 2025, raising fresh alarms.
Good question. Also, they included sadism in the Dark Triad, but it's not part of the triad — Dark Triad + Sadism = Dark Tetrad. I'm sure there are plenty of personality tests that measure this stuff, though. (And they're probably as useful as the Myers-Briggs or the Enneagram! <snarkasm>)
Digits backwards correlates moderately well with full-scale IQ. And I think it makes sense as a measure of one aspect of intelligence -- being able to hold a number of details in your mind at once so you can extract what conclusions you can from the whole welter. It's not just useful for mental math. It's something you might use, for instance, when solving a puzzle with several little rules to it -- there are a bunch of cubes each with sides of different colors arranged as follows, and you have to stack the cubes in such a way that . . .
There could be a job where you have to engineer a solution to a problem like that. Or a situation involving multiple regulations regarding international trade. Obviously being able to hold a bunch of details in mind at once is only one skill used for tasks like that, but it doesn't seem peripheral or trivial to me.
I am not sure I fully understand your objection. Are you objecting that certain subtests are too correlated with one another, that they are uncorrelated with g, or both? Is this a single group of subtests or multiple groups?
My experience taking an IQ test was during an evaluation for ADHD. In that case, the fact that I scored worse on certain subtests despite their usual correlation with the others was interesting and helpful.
In general my impression of psychometricians is that, whatever their flaws may be, they are unusually willing to be politically incorrect and to upset the academic apple cart, and I respect that.
But g is probably not one *thing.* I think it's probably something like, say, athleticism. There are "subtests" for aspects of athleticism -- strength, speed, eye-hand coordination, flexibility, speed of learning routines, etc. etc. You can test them, and they are probably pretty well correlated, but you can't test athleticism itself. Even if you tried to find the root cause of athleticism you would not find a monolithic Something. There would be a genetic component, but general health would also be a component (if you get the gene but also have bad asthma you're probably not going to the Olympics). Early training or play that develops things like eye-hand coordination is probably a contributor as well.
Another analogy: Maybe g is like building tallness. NYC has a high BTQ (building tallness quotient). But not all buildings are tall. And you can't find the tallness itself, separate from the buildings.
>You can test them, and they are probably pretty well correlated, but you can't test athleticism itself.
Sure you can. A popular test for low athleticism is: get up from sitting on the ground without using your hands. That seems to be exactly what Tori is asking for: a problem in that particular area can usually be compensated for by enough general athleticism.
And g is definitely not like building tallness. You can't linearly combine buildings.
Ah, it sounds like you have a disagreement with the principle used to structure the tests. They want a lot of subtests measuring different things, each correlated with g, which requires each of the subtests to be simpler. More complex tests will naturally overlap more -- in addition to being harder to score, as you said.
Without digging out the report, I think I did worse on the backward or distracted versions of some tests than my performance on the forward or undistracted versions would lead you to expect. And there is an infernal digit circling test where I got reasonable accuracy at the cost of being painfully slow; the subjective experience of doing that one was viscerally unpleasant in a way that is difficult to describe.
I really like it, even if the old parchment-esque site felt like one of the last vestiges of the old Internet and I am sorry to lose it. Are there web design sedevacantists, arguing that the Vatican hasn't had a legitimate webmaster in twenty years? There ought to be.
I haven't really dug into the site yet, but I hope prompt English translations imply that they also took the time to reorganize the deeper structure of the site. That was pretty badly needed.
But not everything: it had a habit of giving English-language reports with links where the linked material was in Italian—because pffft, why can't you speak Italian if you're looking up Vatican stuff?
Could anyone give me a realistic path to superintelligence?
I'm a bit of an AI-skeptic, and I would love to have my views contradicted. Here is why I believe superintelligence is still very far away:
To beat humans at most economically useful tasks, an AI would have to either:
1. have seen most economically meaningful problems and their solutions. It would not need a very big interpolation ability in this case, because the resolution of the training data would be good enough.
2. have seen a lot of economically meaningful problems & solutions, and inferred the general rules of the world. Or have been trained on something completely different, and be able to master economically useful jobs because of some emergent properties.
Option 1 is not possible, I think, as a lot of economic value (more and more, actually) comes from handling unseen, undocumented, and complex tasks.
So, we're left with 2.
Great progress has been made just by trying to predict the next token, as this task is perfect for enabling emergent behavior:
- Simple (you have trillions of low-cost training examples)
- Powerful: a next token predictor having a zero loss on a complex validation text dataset is obviously superintelligent.
Even with a simple Cross-Entropy loss and despite the poor interpolation ability of LLMs, the incredible resolution of the training data allows for impressive real-world results.
Now, it's still economically useless at the moment. The tasks being automated are mostly useless (I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably detrimental to economic growth).
Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful (not just bad).
I can’t think of another powerful but simple task that AI could be trained upon. Writing has been optimized by humans to be the most compressed form of communication. You could train an AI to predict the next frame of a video, but it’s soooo much noisier! And the loss function is a lot more complicated to craft to elicit intelligent behavior (MSE would obviously suck).
So now, we're back to RL. It kind of works, but I'm surprised by how difficult it seems to implement, even on verifiable problems.
Code either passes tests or it doesn't. Still, you have to craft a great advantage function to make the RL process effective. If you don't, you get a Gemini 2.5 that spits out comments and try/catch blocks everywhere. It's even less useful than GPT-3.5 for coding.
So, still keeping the focus on code: you, as a human, need to specify what great code is, and implement an advantage function that reflects it. The thing is, you'd need an advantage function more fine-grained than what could fit in a deterministic expression.
Basically, you need to do RLHF on code. Which is costly and scales not with compute, but with human time. Because, sure, you can RLHF hard, but if you have only a few human-certified examples, you'll get an RL-ed model that games the reward model.
The thing is, having a great reward model is REALLY HARD for real-world tasks. It’s not something you can get just by scaling compute.
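To make that concrete, here's a minimal sketch of such a hand-written reward function for code RL (everything here is hypothetical: the test command, the penalty weights, the heuristics). Tests provide the verifiable signal; the crude penalties try to stop the policy from gaming the reward with try/except and comment spam:

```python
# Hypothetical hand-rolled reward for code RL: verifiable tests plus
# crude anti-gaming penalties. All weights are made-up illustrations.
import subprocess

def code_reward(source_path: str, test_cmd: list[str]) -> float:
    # Verifiable part: did the test suite pass?
    result = subprocess.run(test_cmd, capture_output=True, timeout=120)
    reward = 1.0 if result.returncode == 0 else -1.0

    # Heuristic part: penalize the reward-hacking patterns mentioned above.
    text = open(source_path).read()
    reward -= 0.05 * text.count("try:")   # blanket exception-swallowing
    reward -= 0.01 * text.count("#")      # comment spam
    return reward
```

Even in this toy you can see the problem: every penalty is a human guess, and a policy optimized hard enough will exploit whatever the guesses fail to cover.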
Last year, the best counter-argument to my comment would have been “AI progress is so fast, do you really expect it to slow?”, and it would have been perfect. Now, I don’t think we have gotten any real progress since GPT-4 on economically valuable tasks, so this argument doesn’t hold.
Another convincing argument is that “we know the compute power of a human brain, and we know that it’s less than the current biggest GPU clusters, so why should we expect human intelligence to remain superior?”. That’s a really good argument, but it fails to account for the incredible amount of compute natural selection has put into designing the optimal reward functions (feelings, emotions) that shorten the feedback loop of human learning, and the sensors that give us data. It’s difficult to quantify precisely, but I don’t think the biggest clusters are even close to that. Not that we’re the optimal solution to the intelligence problem, just that we’re still way short of artificial compute to compete against natural selection.
I think most of the people who believe in superintelligence believe that it is just the next step after general intelligence. There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. If that’s right, then you don’t need to train on everything - you just need to train on enough stuff to get that general intelligence, and then start doing a bit better.
But I’m skeptical that there is any truly general intelligence of this sort - I think there are inevitable tradeoffs between being better at some sorts of problems in some environments, and other problems/environments. (Often enough, I think the tradeoffs will be with the same problems in different environments.)
"There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. "
When you say "all sorts of problems," what reference set are you drawing from? Just the sort of problems that some human somewhere on Earth could solve (or at least try to solve)? Or vastly more complex or bizarre problems that we haven't studied and perhaps are unable to study?
If it's the former, it seems hard to credit the idea that a generally flexible intelligence *couldn't* exist. Human brains exist. Different human brains are good at different sets of tasks, but we have sharp physical limitations on their power consumption and size/density. Even with only human brains, you could craft a pretty good facsimile of a generally flexible intelligence in the form of a team of people with different specialties--as long as you didn't mind a bit of extra latency while they figured out who was best-suited to tackle it.
It likewise seems hard to credit that a manufactured intelligence could never be as capable as a highly-capable human[1], simply because human brains are made of physics-obeying stuff, doing entirely physics-obeying things, and it would be really, really weird if growing a squishy, low-speed meat computer were the only physically allowed way to do all of those things. So it seems like "AI as capable as highly-capable humans" is very likely to be possible.
But once you *had* such an AI, it seems like you could surpass human level capabilities with almost trivial extra effort. Things that seem likely to be easily possible with a constructed intelligence (that aren't easily possible with humans) include:
1. Allocating near-arbitrary amounts of lossless information storage with near-instant retrieval.
2. Increasing the speed at which it "thinks" up to whatever limitations are imposed by your hardware.
3. Enabling it to create multiple instances of itself, to give full focus to many different tasks at once.
4. Networking it together with other intelligences with different capabilities, allowing for team-like collaboration with (potentially) much lower-latency and higher-fidelity communication than humans can achieve with speech and writing.
Now, it's quite possible that some of those things would actually be infeasible: the trouble with talking about an unknown technology is that you don't know its limits. But if even some of them were feasible, you'd end up with a human-level intellect with access to some degree of superhuman (perhaps enormously so) capabilities. Different people place different bars for "superintelligence" and so it's possible that even all of that together wouldn't pass some people's bars. But to me, at least, that potential capability set seems pretty damn alarming.
[1] Which is not to say that it will necessarily happen soon: I'm uncertain but leaning slightly pessimistic about the ability of LLM-style architectures to get arbitrarily good at approximating human capabilities. If they can't, the next key breakthrough could well be many years off.
What we've seen with every form of intelligence we've observed, whether it's humans or machines or animals, is that each of them has weaknesses compared to others. Some of them have lots of strengths over others - humans can outperform bobcats at lots and lots of things, even if bobcats outperform us at finding small animals in the woods. And even looking at other humans, we find that the ones that are especially skilled at some things have weird deficits in others, whether it's absent-minded professors, or Elon Musk believing whatever weird conspiracy theory he read on some tweets at 4 am.
If there were such a thing as an intelligence that really could do *all* sorts of problems, which I think the idea of AGI is actually about, it would actually be better than any actually existing intelligence at lots of things - particularly those complex and bizarre problems we haven't studied and may not be able to study.
But I don't think that sort of thing is actually possible. Instead, we'll get machines that are better than humans at more and more things, but there will still be some things they remain weirdly bad at compared to us, just as humans are weirdly bad at finding small animals in the forest compared to bobcats. It may well be that there are several classes of AI, each of which has different sorts of weaknesses compared to humans.
Meant to reply to this earlier, but a detailed reply will take a while. So short answer: I think you're conflating "long-term cognitive capability" and "learned specialization."
For example, you say "bobcats outperform us at finding small animals in the woods." Now, I expect that a bobcat picked at random will outperform Linda the investment banker from Chicago at this task[1]. But I suspect that a solid majority of humans could outperform a bobcat here--though how and what you measure makes a big difference--if they started learning the relevant skills from an early age. Quite a few could probably re-train into them as adults in a matter of a few years.
While I don't doubt different people have different innate[2] strengths and weaknesses across different sorts of cognitive tasks, in practice we specialize so much--and from such an early age--that it's hard to tell nature from nurture. Regarding AIs: it's certainly possible for different architectures to be more or less well-suited to certain sorts of tasks. But there's a degree of flexibility and extensibility there that changes the game. If a single architecture *can* be good at Tasks A, B, and C, but each requires different training, well, why not just train it on all three? Even if they actually need different sets of weights and biases[3], you can just train 3 instances and then network them together into a single agent. And you could likely do that almost as well across architectures. To really be confident that humans *always* retain a niche, you'd need to identify things that *no* computer-based algorithm could compete with a human brain at.
[1] Though of course the bobcat is probably absolute rubbish at maintaining a valuable stock portfolio.
[2] Though this is a slippery word: genes are certainly "innate," but a lot of early environmental factors are barely less so. There's a very fuzzy edge around which environmental factors aren't.
[3] or whatever architectural equivalent future AI paradigms will use.
My main disagreement is with the last paragraph. I agree that we don’t have anywhere near enough compute to simulate natural selection and find better reward functions. But I also think that reward functions that result in superintelligence are not too complex. I don’t know how to explain why I believe this, it comes largely from intuition. But I think given the assumption “reward functions for superintelligence are simple”, you can reasonably get that superintelligence will be developed soon, given the hundreds of researchers currently working on the problem.
> Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful
I'm not sure you can back this up. If doubling the compute doesn't double the performance, that's worse than linear. You're trying to show each doubling in compute doesn't even give the same constant increase on some metric of performance, and that metric would have to be linear with respect to the outcome you're trying to measure. I'm not sure we have such a metric, and some metrics, like AI vs human task duration, appear to be increasing exponentially.
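A toy illustration of that metric-dependence (all numbers invented, purely to show the shape of the argument):

```python
# Invented numbers, purely illustrative: a capability metric that gains a
# constant amount per compute doubling is *exactly* logarithmic in compute.
import numpy as np

compute = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # successive doublings
benchmark = 50 + 10 * np.log2(compute)           # +10 points per doubling

print(np.diff(benchmark))    # [10. 10. 10. 10.] -- constant gain per doubling

# Re-express the same capability as, say, task horizon (minutes of human work
# the model can complete), and the identical series looks exponential:
horizon = 2.0 ** (benchmark / 10)
print(np.diff(horizon) / horizon[:-1])           # constant *relative* growth
```

Same underlying model series, and one metric says "merely logarithmic" while the other says "exponential." That's why the choice of metric has to be justified before declaring scaling dead.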
> I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably nefarious to economic growth
Well, I feel that most of the software I've developed (mainly ML models and ERP software) has been used to help with problems that already had human solutions.
2 examples:
- Some features of the ERP software I've helped develop were related to rights management and paperwork assistance. For the first feature, the real consequence is that you keep an employee out of some part of the business, effectively telling him "stay in your lane," which is not good for personal engagement. The second is more pervasive: when you help people generate more reports, you are basically allowing middle managers and lawmakers to ask for more of them. So you end up with incredibly long contracts, tedious forms, and so on. Contracts were shorter when people had to type them and copy-pasting didn't exist.
- I've developed a complex ML model for estimating data that people could just have asked other people for. When I discovered that, I told the customer: "you know, you could just ask these guys, they have the real numbers." But I guess they won't, because they now have a good-enough estimate: net loss.
Now, of course, I've developed useful things, but I just can't think of any right now ^^
I would be careful to read too much into that graph without doing some more careful statistical analyses. There’s a plausible enough picture in which the left 60% of the graph should see zero effect, and the right 40% should see a roughly linear effect, and if I squint it actually looks compatible with that. But also, 2.5 years is just a really short time frame, and there have been some much bigger short term effects in some industries with the presidential transition.
That's not consistent with the recent rise in unemployment for CS grads. I've heard too much anecdotal evidence to believe it's unrelated to AI. I wouldn't expect AI to have impacted other industries yet. It's too new. Only software companies are agile and tech-savvy enough to adjust to new technology so quickly.
Cost-saving innovations tend to roll out during recessions. I expect AI to really surface during the next one.
It's perfectly consistent: there's even too much software out there, so there are hiring freezes. And interest rates are still much higher than in the pre-covid era. We haven't seen a slowdown of employment in the professions that, according to economists, are most susceptible to AI-induced job loss, but we have seen a slowdown of employment in the professions most susceptible to economic downturns. The slowdown is not only in software, but in real engineering too - perfectly consistent with firms cutting R&D budgets.
My wife and I are considering making a large change: we both grew up and live in the Mountain West, got married, and had children, who are now on the verge of making the transition to junior high school\middle school. We like where we live now but don't *love* it, and don't have extensive social ties here we'd be sad to leave.
My parents, and sister and her family, live on the East Coast, in the place we would normally not consider moving to, but as time passes, we've come to appreciate how much we've missed being so far from family, and are considering relocating to be closer to them. My parents are in general good health, so barring unforeseen events we expect to have years of quality time to spend.
What are the main concerns I should think through, aside from the usual cost of living and school quality issues?
One thing you may not have considered is the humidity. I live in the DC area, and my wife (who grew up in Utah) still finds the humidity here during the summer terrible after 20 years. We have two dehumidifiers running in our house!
ETA: Visit wherever you're considering moving in the summer--now through mid-to-late August, say. That will give you more of a taste of what you're getting into.
Oh for sure, we have visited the area in both the dead of winter and the height of humid summer. My hope would be that the proximity to both family and cultural amenities would override the discomfort of the weather
After having grown up in the mountain west, moving to the east coast for 12 years, and having moved back to the mountain west…
Summers are painful when you have to tolerate them year after year on the east coast. The humidity and banality of the weather suck. No more 40-degree temp swings between day and night, or from one day to the next. No more snow, and when it does snow it's an apocalypse.
Same with traffic when you have to tolerate it every single day. There are people everywhere on the east coast… it’s impossible to escape.
You’ll miss open landscapes. I’m convinced being used to seeing a big sky and far-reaching distances, then suddenly not, is akin to seasonal affective disorder. It does make trips back out west magical though.
If outdoor recreation is your thing, it’s worse on the east coast. It can still be done, but it’s less beautiful, less available, and more crowded.
If you have 100 kids, on average they will grow up with fewer "masculine" traits on the east coast. This has both good and bad attached to it; just beware. The cultures are indeed different.
Overall there are plenty of goods and bads… I moved back to the mountain west for my community and the views. If those weren’t important to me (or if I had community elsewhere) I may not have made the move back. Yet still sometimes I’m struck by the annoying aspects of hyper-masculine culture here (exaggerated because I do blue-collar work), just as I was struck on the east coast by the annoying aspects of hyper-feminine culture.
One last note… when I was in 7th grade my parents almost moved us to another state. I was on board with the plan, but it ended up not happening. That move *not happening* was one of the luckiest moments of my life—unbeknownst to me at the time—because having grown up in one area my entire adolescence gave me friends and a community that will be with me forever. I have a true “home” more so than my parents ever did.
I always felt the huge landscapes in the West helped (me, at least) keep things in perspective. Everything human scale is dwarfed by the surroundings. My parents moved us to Utah in 7th grade and it turned out to be the best thing that happened to me.
I work from home so commuting wouldn't be a major daily annoyance, and I don't have a strong community network here in CO where I currently live, but also don't really anticipate building a strong community in the NE outside of my family. Interesting point about the masculine\feminine culture; I get somewhat tired of the blue collar masculine cultural aesthetic mainly because I'm not part of that demographic, but I'm also tired of the people who make "being outdoorsy" their entire personality. Anyway, lot to think about; I appreciate the discussion
Which part of the East Coast? Massachusetts is very different from Maryland.
I also grew up in the Mountain West and lived in the East Coast for a time as a child. Overall the mountains offer a better quality of life: they're less crowded, cheaper, generally cleaner, and in every way healthier.
The biggest advantage of East Coast life is proximity to America's great cultural institutions. If you live in the NE megalopolis, you are more plugged in to world culture than the great majority of humans. Since it's more densely populated you also benefit more from network effects. Your family is even an example of this.
As with so many things in life it comes down to values. I'd say if you care more about people, move to the coast. If you care more about nature or lifestyle, stay in the Mountain West.
I didn't want to get too specific, in part not to bias responses too much, but the locations we're talking about definitely matter. We're from Salt Lake City (not Mormon), and now live outside Boulder, CO. My parents and sister's family are in suburban New Jersey outside Philadelphia. We love the cultural access in the NE, but the crowding and humidity are the big turn-offs for me. We spent 5 years in Austin so we're familiar with scorching heat\humidity and don't enjoy it. If there were any way to arrange matters such that we all lived in the West that would be ideal but that's not a viable option.
We visit Utah more-or-less every year, and one thing that's striking is how different the assumptions wrt family size are. Our family of five is a little too big for a lot of stuff elsewhere, especially in the DC area--they can accommodate you, but you're a bit of an exception. In Utah, we're a small family.
Totally understand this point; Utah is a great place for large families. Somewhat relatedly, the area we live in now is mostly upper-middle class striver types with both parents working; in our family I'm the sole breadwinner and my wife stays home and we can definitely sense the mixture of resentment and disdain from some people here. Being in an area with a larger diversity of acceptable life choices would be refreshing
It's a part of the country with a different culture, climate, and geography than I'm used to. I've enjoyed my many visits there, and within two or three hours drive there is a large array of things to do and places to see, but the place we'd be moving is itself not a big draw.
I'm pretty far behind you as my wife and I just had our first child in January, so while I can't answer your question, I can say that even these first six months (and the year and a half of marriage before having a kid) have been a time of rich fullness just due to the fact that my wife's family and my parents all live close by. Our location doesn't account for much of that, as I live smack in the middle of North Dakota.
I'm sure that we would still be very much enjoying life together even if none of our family were close by, but having family around definitely adds an extra depth and richness that I feel would make a move like you're describing worth it.
I'm not even so sure about adding depth and richness, but it sure would be nice to have free babysitting.
OP's kids are a bit older, but I do regret having gone through the small kids phase with no family nearby; it takes a huge amount of pressure off when there's someone who can watch the kids sometimes and let both parents have a little break.
Thanks for replying. For me this is a choice between great climate and access to great natural beauty, or closeness to family and the ability to share our life in a more casual, regular way than guesting\hosting family for a week or more in their\your house. For years the choice was obvious.
Been having a lot of fun working with ChatGPT on an alternate history scenario where the transistor was never invented- somehow, silicon (and germanium etc.) just doesn't work as a semiconductor in this alternate timeline. It seems like humanity would have invented vacuum microelectronics instead? Maybe done more advanced work with memristors too? It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller.
Without electronic capital markets you'd have a radically different 20th century- slower growth, more stable, less capital flows in and out of countries. This might've slowed China's growth, specifically- no ecommerce, less investment flows into China originally, no chance for them to hack & steal Western technology. Also a decent chance that the USSR's collapse might not have been as dramatic- they might've lost the Baltics and eastern Europe, but kept going otherwise. The US would probably be poorer without Silicon Valley, plus Wall Street would be smaller without electronic markets. Japan might really excel at the kind of precision mechanics & analog systems that dominate this world. So it'd be a more multipolar world overall.
(I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)
Sounds like a fun project. If you haven't seen it already, you may enjoy some of the summaries of pre-IC space station proposals here: https://projectrho.com/public_html/rocket/spacestations.php#atlasstation . No ICs *might* mean a much larger human presence in space. That plus an intact USSR could be very interesting.
Copying an AI summary from a query "cold field emission microelectronic vacuum tubes"
>Cold-field emission microelectronic vacuum tubes, or vacuum microelectronics, utilize the mechanism of electron emission into a vacuum from sharp, gated or ungated conductive or semiconductive structures, avoiding the need for thermionic cathodes that require heat. This technology aims to overcome the bulkiness of traditional vacuum tubes by fabricating micro-scale devices and offers potential applications in areas such as flat panel displays, high-frequency power sources, high-speed logic circuits, and sensors, especially in harsh environments where conventional electronics might fail
Admittedly these are still higher voltage and less dense devices than semiconductor FETs, but electronics would not have been limited to hot cathode bulky tubes even if silicon transistors never existed.
"It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller."
Don't forget that you can probably have fairly large computer memories (in the context of vacuum tubes ...) because of core memory:
PDP-11s shipped with core memory and you can do QUITE A LOT with 1 MB (or less).
And you don't need transistors for hard drives, either :-)
Imagine "programs" being distributed on (error correcting encoded) microfiche.
Sounds like fun in a steam-punk way.
Also, you can easily imagine a slow internet. Think something like 1200 baud (or faster) between major centers (so very much like early Usenet). You won't spend resources on images or pretty formatting, but moving high value *data* should work.
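A back-of-the-envelope sketch of what such a link could move (assumptions inline; the "page" size is invented):

```python
# 1200 bps serial link, ~10 bits per byte with start/stop framing.
bytes_per_sec = 1200 / 10                     # ~120 bytes/s
page_bytes = 2 * 1024                         # a text-only "page" of ~2 KB
print(page_bytes / bytes_per_sec, "seconds")  # ~17 s per page: slow, but fine for data
```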
About the time transistors were becoming widely used, micro-vacuum tubes were also in use. I don't know what their lifespan was, and clearly transistors were found superior, but they were competitive in some applications.
So, yes, vacuum micro-electronics would have been developed. I've got doubts that memristors would have shown up any more quickly than they did here.
It's not clear that vacuum electronics couldn't have been developed to the same degree of integration that transistors were, so I'm not sure the rest of your caveats hold up. They might. I know that vacuum electronics were more resistant to damage from radiation, so there might well have been a different path of development, but I see no reason to assume that personal computers, smart phones, routers, etc. wouldn't have been developed, though they might have been delayed a few years. (That we haven't developed the technology doesn't imply that it couldn't have been developed.)
It's also possible to miniaturize electromechanical switching to IC scale with MEMS and NEMS relays. It's a lot slower than transistors, which is why it's only used for specialty applications, but it's possible.
My husband was forced untimely into a quick round of unsatisfactory car shopping after Allstate took its sweet time deciding to total his perfectly driveable old Subaru. He'd been rear-ended by someone who spoke no English, had no proof of insurance on him, and said he had insurance but didn't know the name of the company before driving away (the babies were crying; the side of a freeway is no place for a half dozen children) - and who miraculously did have it (one time the state doing its seeing-like-a-state thing was helpful).
As a result - life having other distractions, and he having little interest in modern cars - he got steered into buying his first “new” car.
That’s something that won’t ever happen again!
All those new features he didn’t want to pay for … and Subaru doesn’t need to haggle, period.
He was set to get his two requests - an ignition key and a manual, slam-shut gate - swapped in from a dealer in another city, but in the event a buyer there was simultaneously grabbing that car, so the one they brought in was sadly keyless.
We should have just returned home (an hour plus away), but a certain amount of time had been invested, and a planned road trip was upcoming.
Question: should I get him one of those faraday cage thingies? It has been established that he won’t stuff the fob in foil every night, nor remember to disable it.
He didn’t even know about this car-stealing method, not being much online and certainly not on NextDoor.
There is no consensus on the internet about the need for this. Possibly it's already passé, superseded by new methods of thievery.
We live in a city that had 18,000 cars stolen last year. Not generally Subarus, probably ... but anyway. The car sits within 50 or 60 feet of the fob, in an apartment parking lot, not within view.
Our cars, when we’ve occasionally, inadvertently left them unlocked (long habit from where we lived previously) have reliably been rifled through, though it was a wash: we had neither guns nor electronics nor drugs. Once, memorably, they stole his car manual. I recall thinking that they’d better come by around daylight savings time and change the clock for him.
A couple of strategies I use with my vulnerable Hyundai in a large city with many many many stolen cars:
1. Make the interior of your car look like a poor, lazy, low-class, possible drug addict is living in it. Leave an empty fast food cup or two in the cup holder, and some random receipts and leaves and other trash tossed around. The cabin visible through the windows should look unpleasantly chaotic.
The goal here is to make it look like it's *completely impossible* that there is *anything* in the car worth breaking a window for; you want your car to look like the person driving it couldn't possibly have any loose change or small bills in a compartment, because they would have already spent them, and no way are there any nice sunglasses or first aid kits or emergency cash or snow chains or changes of clothes or any of the kind of useful stuff I keep neatly organized in my trunk.
2. Consider installing an ignition kill switch. My car's dashboard lights will come on when I insert my key, but my engine won't turn over unless I engage the hidden button my friend installed for me. While a smart thief could probably find said hidden button if they checked around a bit, I'm fairly confident it wouldn't occur to them to check around a bit, as a) ignition kill switches are incredibly rare and b) my car looks like it belongs to a drug ghoul who wouldn't know about ignition kill switches.
Thank you! This is not a problem for my car, which doesn't have tinted windows and can easily be viewed from outside. I just don't leave anything in it. It has one cool feature too. It's so old that the interior lever to pop open the trunk doesn't work. The trunk is truly a lock box, which is handy if you're hiking or floating a river or something. I take my key. People can leave their stuff in my trunk. There's no way to get in.
In general, my practice in my last city was to leave the car unlocked rather than have the window smashed unnecessarily.
Years ago, that city had so little crime that we would say if you left your car unlocked somebody might leave you a present in it.
Now I don’t like to have the stuff in the glove box thrown about. Makes you feel kind of bad so I lock it.
We shall see what happens with my spouse’s car. It’s in its beautiful new condition. In fact, we took a road trip and after we got back, I spent half a day restoring it to new car condition.
Usually, I’d be looking for the first dent to happen with relief, but in this case I want to keep it nice; I feel like it’s not a car he’ll want to keep forever.
I’ve never heard of this sort of faraday cage thing. How many cars have been stolen from the apartment parking lot in the last few years? Does insurance cover such thefts? My guess is that a precaution against this one method of theft isn’t that likely to make a big difference, particularly since theft is not that common anyway (apart from the weird Kia/Hyundai exploit that was discovered during the pandemic), but if the faraday cage is cheap and convenient and easy to set up in the tray where you put keys and wallet when you get home anyway (or however you do it), it could still be net worth it.
It was often mentioned in my former neighborhood where a car was stolen or at least ransacked virtually every night. These were much nicer cars and trucks than we then owned. It was a little mysterious. I just couldn’t picture all these vehicles being silently hotwired, and nobody having the slightest knowledge of it inside the house. That’s when people started talking about this other business with the key. A few admitted that they had left their key in the car or rather the fob.
It was a house and I parked my car in the garage anyway but most people used their garage for storage. My husband’s old Honda was in the driveway, but was not very attractive.
I just looked on a forum for people who are crazy about the make of car we just bought and the subject really didn’t come up.
Two cars were legit stolen from our complex in the last couple years, while another one was TikTok challenged and driven away to a shopping center nearby. I don’t know about this surrounding neighborhood because I didn’t get on next-door after we moved here.
The latter Kia or Hyundai was destroyed, however, between the broken windows and the damage to the steering column. It was my neighbor’s old vehicle and he was super sad though he bought a much nicer car.
I thought they said to keep the keys *inside*, but right next to the door, so that *if* someone breaks into your house, they don’t go ransacking the house and possibly turning violent. But a lot of performatively anti-woke people happily misinterpreted that as though they were saying to leave keys *outside* by the door.
Supposedly using a device to capture the signal pinging between the key fob and the vehicle. How you would start the vehicle thereafter away from the fob I don't know. Or maybe just as a means to open the vehicle and throw stuff from the glove box around.
I really thought this was a thing as it was so commonly referenced, but now I'm not sure if it was imaginary/dreamed up by people who didn't want to admit they left their fob sitting in the car.
A relay attack lets a thief extend the range of the key fob by retransmitting its signals, allowing them to unlock and start the car. It doesn't let them clone the key fob. Once started, cars will not automatically shut off when the key goes out of range. Some cars have protection against relay attacks, but I think most do not. The thief has to get close enough to the key fob to pick up its signal, and they need the key's signal in real time. They can't record the signal and replay it later.
Yes, that’s what I meant. Didn’t mean they would randomly capture signals and store them for later use.
I never had a reason to think about it before. My own car is a very basic car from 2009.
I had just absorbed by osmosis this idea about newer cars.
But upon researching it, I couldn’t find that after all people seem particularly worried about it. Or any agreement about what’s going on with the key, whether it’s really talking to the car or sitting there inert.
Not sure if the subject is just really well understood only by those who steal cars and those who know a lot about electronics.
This is a very old attack in the cryptographic literature. IIRC, it was originally called the mafia fraud attack, though really it's just an instance of a man-in-the-middle attack.
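A schematic sketch of why relaying works but recording doesn't, using a toy challenge-response protocol (not any real fob's protocol; every name and parameter here is made up):

```python
# Toy challenge-response fob: the car sends a fresh nonce, the fob HMACs it.
import hmac, hashlib, os

KEY = os.urandom(16)  # secret shared by car and fob

def fob_respond(challenge: bytes) -> bytes:
    return hmac.new(KEY, challenge, hashlib.sha256).digest()

def car_unlock(respond) -> bool:
    challenge = os.urandom(8)  # fresh nonce every attempt
    expected = hmac.new(KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

# Relay attack: the thief's radios just pipe bytes between car and distant fob.
# The fob answers honestly, unaware of the range extension -- the car opens.
print(car_unlock(lambda ch: fob_respond(ch)))   # True

# Replay attack: a response captured earlier fails, because the nonce changed.
stale = fob_respond(os.urandom(8))
print(car_unlock(lambda ch: stale))             # False
```

Nothing is cloned: the thief never learns KEY, which is why the attack only works while the real fob is in radio range of the relay.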
A rather stupid syllogism, but I don’t own such a box. That’s what I’m trying to learn - if it’s worth buying one. For some reason I thought this would be an easy layup for this crowd.
Something interesting I learned today*: Among professional historians, antiquarians and the like there is a widespread consensus that Jesus of Nazareth was a real, historical person. Important disclaimer, this distinguishes the historical personage from any supernatural capabilities he may or may not have had.
They cite about half a dozen non-biblical references: Tacitus, Josephus, Pliny the Younger, Suetonius, Mara Bar-Serapion, Lucian, and the Talmud. Most of these are pretty brief or oblique, but they converge on a pretty recognizable figure. The evidence that he existed is a lot stronger than the evidence that he was a mythical creation, which is why mainstream scholars of all stripes have landed there.
> The other interesting thing about this, the scholarly consensus is a lot stronger than public perception
This seems unsurprising. If you asked scholars and Americans if Sudan existed, I am sure you would get similar answers.
Also, "there existed a historical person who resembles the person in the tale" is an excessively low bar to clear. Siddhartha Gautama likely existed. Jesus likely existed. Mohammed very likely existed. Gilgamesh likely existed. Alexander very likely existed. The Iliad could well be based on a historical conflict. Moses could well have existed.
The nicest argument I have heard for Jesus's historical existence is from the lack of denial: critics of Christianity in the early centuries AD made lots of attacks of almost every kind, but they apparently didn't claim that there never was any such person as Jesus. The point being that if there had been any doubt, they would have jumped on that attack for sure.
A year or two ago, I watched an extended interview with Richard Carrier, who's one of the highest profile people arguing against the historicity of Jesus. He's a classical historian by training and a pop historian and Atheism advocate by vocation. IIRC, his thesis is that Christianity started among ethnic Jews living in the Roman world and followed what was then a fairly common template of venerating a purely spiritual messianic figure, and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier.
Carrier made some interesting arguments about the mythological pattern which I lack the expertise to assess in detail. Where I do think he rather badly misstepped was in making a big deal out of the Gospels and Epistles being written in Greek rather than Aramaic. I don't think that needs much explaining, given how few classical documents have survived to the present. Greek was a major literary language throughout the region while Aramaic was not, and Christianity caught on much, much more in Greek- and Latin-speaking areas than in Aramaic-speaking areas, so only Greek foundational texts surviving isn't particularly surprising. The wikipedia article for "ancient text corpora" cites estimates from Carsten Peust (2000) that our text corpus from prior to 300 AD is 57 million words of Greek, 10 million words of Latin, and 100,000 words of Aramaic.
Where did you get the idea that Aramaic wasn't a significant language of the region at the time? It was the lingua franca from the Levant to Persia for centuries.
The Talmud alone is in the ballpark of 2.5 million words, most of it two dialects of Aramaic and most of the rest in Hebrew. While it was compiled later than 300 AD, it contained a body of work stretching over many centuries, stretching back well into the Second Temple period.
The Mishnah, compiled centuries earlier, was primarily Hebrew but with some Aramaic.
And that wikipedia page lists 300,000 words for Hebrew - the Tanakh has over 300k words, the Torah 80k of them.
All that is to say, even if we really do have fewer surviving words of Aramaic than Greek, that almost certainly has more to do with our sample than the ancient source.
I was only counting Aramaic, not Hebrew, and trying to time-box it to Classical Antiquity. That wikipedia page lists 100k words for Aramaic and 300k for Hebrew. But even if you want to count both together, on the grounds that they're closely related languages, and also extend the time window to include the Talmud, that's still a lot smaller than the corpuses for Latin or Greek. That said, I am prepared to be told that the wikipedia page is wrong (either misinterpreting the source or relying on a bad source) and would be grateful if you could point me at a better one.
My impression was that Aramaic was a significant spoken language in the Middle East, but much less significant as a literary and documentary language in the broader Mediterranean world than Greek or Latin. So someone writing for an audience in the Levant might well choose to write in Aramaic, but someone trying to write for an audience throughout the Roman Empire would probably do so in Greek or Latin depending on their own fluency, the type of document they were writing, and what parts of the Roman Empire they were most focused on.
I'm also pretty sure there was a much better infrastructure for copying and preserving Greek and Latin works from Classical Antiquity through the Middle Ages than there was for Aramaic or Hebrew, especially when it came to Christian religious texts. The people making and keeping copies of Greek and Latin documents in the middle ages were mostly Christian or Muslim, while the ones doing so for Hebrew and Aramaic were mostly Jewish. There were a lot more of the former than the latter, giving Greek and Latin documents a better chance at surviving in general. And Jewish scribes would be a lot less likely to be interested in preserving Christian gospels than Christian scribes would be, with Muslim scribes probably somewhere in between. Taken together, if there were Aramaic or Hebrew gospels, it isn't surprising at all that they weren't preserved to the present.
I think your last paragraph is crucial. Our modern sample tells us a lot more about the process of transmitting and recovering that history than it does about the original written corpus.
And given the scale of the numbers they cite from Hebrew and Aramaic, our estimates are always just one or two Dead Sea Scroll or Cairo Geniza type finds away from being totally obsolete.
I don't have a better source than the referenced page from wikipedia - just reasons to believe that its numbers represent a significant underestimate. And that, therefore, we shouldn't confidently draw conclusions yet from comparing the numbers.
> and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier
That doesn't sound like he's arguing against the historicity of Jesus at all then, if he's saying that Jesus is based on an actual historical person. That just sounds like the mainstream view all over again -- Jesus was real, some of the stories told about him are false, and we can quibble about exactly how much was real.
Carrier is loudly and explicitly claiming that there was no actual historical person who lived in Judea c. 30 AD matching the description of Jesus of Nazareth, and that pre-Pauline proto-Christians would have agreed with this, as they would have believed in a purely spiritual Christ and told allegorical stories about him set in a spiritual realm. Per Carrier, the claim that Jesus was a human who ministered in Judea was an invention of Paul and the Gospel writers, who re-wrote the existing stories *as if* Jesus were a real person who had been physically present in and around Jerusalem.
Right, I think I misunderstood the sentence I quoted, I thought he was saying that they'd merged their spiritual messiah with stories about some actual bloke.
Greek was the lingua franca at the time, and it was what educated people largely wrote in, particularly in the east. Marcus Aurelius even wrote his Meditations entirely in Greek.
In no way would the writers of the gospels write in Aramaic. John and Luke may not have even spoken it.
Exactly. If there was an Aramaic proto-gospel, it would have had to have been very early and very niche and it probably would have been oral rather than written. Anyone writing in the Eastern Mediterranean for a broader audience would have done so in Greek.
Oh, Carrier is the guy that Tim O'Neill has the beef with. Doesn't think much of Dr. Carrier's arguments 😁
I'm Irish Catholic so you know which side of the fence I'm coming down on here, but I do have to admit to a bias towards the Australian guy of Irish Catholic heritage as well! I can't say it's edifying, but it's fun:
"It seems I’ve done something to upset Richard Carrier. Or rather, I’ve done something to get him to turn his nasal snark on me on behalf of his latest fawning minion. For those who aren’t aware of him, Richard Carrier is a New Atheist blogger who has a post-graduate degree in history from Columbia and who, once upon a time, had a decent chance at an academic career. Unfortunately he blew it by wasting his time being a dilettante who self-published New Atheist anti-Christian polemic and dabbled in fields well outside his own; which meant he never built up the kind of publishing record essential for securing a recent doctorate graduate a university job. Now that even he recognises that his academic career crashed and burned before it got off the ground, he styles himself as an “independent scholar”, probably because that sounds a lot better than “perpetually unemployed blogger”."
Yeah, my impression of Carrier is that he seems clever and interesting, but the actual substance of his arguments seems pretty weak even aside from my priors about who's likely to be right when a lone "independent scholar" is arguing that the prevailing view of academic experts is trivially and obviously false on a subject within their field.
O'Neill is fun and I trust him because although he's an atheist himself, he gets so pissed-off by historical errors being perpetuated by online atheists and the mainstream that he goes after them.
He does have a personal grudge going with Carrier, so bear that in mind. Aron Ra is another one of the Mythicists with whom O'Neill tilts at times, but not as bitterly as with Carrier.
I was amused by the reference to Bayes' Theorem (seeing as how that's one of the foundations of Rationalism) in the mention of Carrier's book published in 2014:
"Two years ago Carrier brought out what he felt was going to be a game-changer in the fringe side-issue debate about whether a historical Jesus existed at all. His book, On the Historicity of Jesus: Why We Might Have Reason for Doubt (Sheffield-Phoenix, 2014), was the first peer-reviewed (well, kind of) monograph that argued against a historical Jesus in about a century and Carrier’s New Atheist fans expected it to have a shattering impact on the field. It didn’t. Apart from some detailed debunking of his dubious use of Bayes’ Theorem to try to assess historical claims, the book has gone unnoticed and basically sunk without trace. It has been cited by no-one and has so far attracted just one lonely academic review, which is actually a feeble puff piece by the fawning minion mentioned above. The book is a total clunker."
O'Neill's quote from Carrier proudly displayed on his website:
"“Tim O’Neill is a known liar …. an asscrank …. a hack …. a tinfoil hatter …. stupid …. a crypto-Christian, posing as an atheist …. a pseudo-atheist shill for Christian triumphalism [and] delusionally insane.” – Dr. Richard Carrier PhD, unemployed blogger"
Deep calls to deep, and so does Irish invective between the sea-divided Gael so that's probably why I like O'Neill so much even apart from his good faith in historical arguments.
Academics don't view denial of Jesus' existence as much of an argument. Most call it "fringe."
If you're interested in going deeper, I would recommend looking into the modern quests for the historical Jesus, which not only surfaced and studied extrabiblical sources on Jesus, but also developed methodologies for evaluating the gospels.
Academics I've read and listened to lean toward the conclusion that only two events in the gospels about Jesus' life are reliable: his baptism by John the Baptist, and his execution by the Romans. (These both rely on the criterion of embarrassment; that is, because these events undermine his followers' beliefs, their inclusion in the gospels suggests they actually occurred.) Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations.
The quests for the historical Jesus also bleed into modern understandings of how the gospels were authored, such as the dominant theory of Markan priority, and the theoretical Q document.
"Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations."
This is true, but in the context of discussing a New Atheist figure it's worth adding some context. For most of these scholars, rejection of the supernatural is a premise rather than a conclusion. It's often the case that an academic will write, "Since its miracle stories are false, this document must be late," only for his reader to say, "Since this document is late, its miracle stories must be false," without realizing the circularity.
C. S. Lewis wrote on this very thing in the introduction to his book "Miracles":
"Many people think one can decide whether a miracle occurred in the past by examining the evidence ‘according to the ordinary rules of historical enquiry’. But the ordinary rules cannot be worked until we have decided whether miracles are possible, and if so, how probable they are. For if they are impossible, then no amount of historical evidence will convince us. If they are possible but immensely improbable, then only mathematically demonstrative evidence will convince us: and since history never provides that degree of evidence for any event, history can never convince us that a miracle occurred. If, on the other hand, miracles are not intrinsically improbable, then the existing evidence will be sufficient to convince us that quite a number of miracles have occurred. The result of our historical enquiries thus depends on the philosophical views which we have been holding before we even began to look at the evidence. The philosophical question must therefore come first.
"Here is an example of the sort of thing that happens if we omit the preliminary philosophical task, and rush on to the historical. In a popular commentary on the Bible you will find a discussion of the date at which the Fourth Gospel was written. The author says it must have been written after the execution of St. Peter, because, in the Fourth Gospel, Christ is represented as predicting the execution of St. Peter. ‘A book’, thinks the author, ‘cannot be written before events which it refers to’. Of course it cannot—unless real predictions ever occur. If they do, then this argument for the date is in ruins. And the author has not discussed at all whether real predictions are possible. He takes it for granted (perhaps unconsciously) that they are not. Perhaps he is right: but if he is, he has not discovered this principle by historical inquiry. He has brought his disbelief in predictions to his historical work, so to speak, ready made. Unless he had done so his historical conclusion about the date of the Fourth Gospel could not have been reached at all. His work is therefore quite useless to a person who wants to know whether predictions occur. The author gets to work only after he has already answered that question in the negative, and on grounds which he never communicates to us.""
Even if I were a theist, I would be doubtful about miracles. From what we know of the observable universe, which is vast beyond comprehension, it seems that whoever created it is really big into the laws of physics. Breaking the laws of physics to help a few people in ancient Judea seems really out of character.
And even if she did, she would have taken great care that the miracles are entirely deniable in our age. Why have Jesus walk over water and heal the sick when he could just have placed a cubic kilometer of titanium monument near Jerusalem which is inscribed with the correct faith and will heal all illnesses in any believer who touches it? For some weird reason God was fine with Jesus converting his followers through his wizard powers but really prefers later humans to find faith without the benefit of being able to update on statistically significant miraculous results. Sounds fishy.
If a historian finds a document which uses Latin phrases which will not appear for another few centuries after the document claims to have been written, he will conclude that the document is a forgery, and not consider the possibility that it might have been written by a time traveler, even though that would explain the evidence equally well.
>Breaking the laws of physics to help a few people in ancient Judea seems really out of character.
Lewis spent a whole chapter addressing this critique in the book (Chapter 12, The Propriety of Miracles). Here's some excerpts:
"If the ultimate Fact is not an abstraction but the living God, opaque by the very fullness of His blinding actuality, then He might do things. He might work miracles. But would He? Many people of sincere piety feel that He would not. They think it unworthy of Him. It is petty and capricious tyrants who break their own laws: good and wise kings obey them. Only an incompetent workman will produce work which needs to be interfered with. And people who think in this way are not satisfied by the assurance given them in Chapter VIII that miracles do not, in fact, break the laws of Nature. That may be undeniable. But it will still be felt (and justly) that miracles interrupt the orderly march of events, the steady development of Nature according to her own inherent genius or character. That regular march seems to such critics as I have in mind more impressive than any miracle. Looking up (like Lucifer in Meredith’s sonnet) at the night sky, they feel it almost impious to suppose that God should sometimes unsay what He has once said with such magnificence. This feeling springs from deep and noble sources in the mind and must always be treated with respect. Yet it is, I believe, founded on an error
…
"A supreme workman will never break by one note or one syllable or one stroke of the brush the living and inward law of the work he is producing. But he will break without scruple any number of those superficial regularities and orthodoxies which little, unimaginative critics mistake for its laws. The extent to which one can distinguish a just ‘license’ from a mere botch or failure of unity depends on the extent to which one has grasped the real and inward significance of the work as a whole. If we had grasped as a whole the innermost spirit of that ‘work which God worketh from the beginning to the end’, and of which Nature is only a part and perhaps a small part, we should be in a position to decide whether miraculous interruptions of Nature’s history were mere improprieties unworthy of the Great Workman or expressions of the truest and deepest unity in His total work. In fact, of course, we are in no such position. The gap between God’s mind and ours must, on any view, be incalculably greater than the gap between Shakespeare’s mind and that of the most peddling critics of the old French school.
…
"How a miracle can be no inconsistency, but the highest consistency, will be clear to those who have read Miss Dorothy Sayers’ indispensable book, The Mind of the Maker. Miss Sayers’ thesis is based on the analogy between God’s relation to the world, on the one hand, and an author’s relation to his book on the other. If you are writing a story, miracles or abnormal events may be bad art, or they may not. If, for example, you are writing an ordinary realistic novel and have got your characters into a hopeless muddle, it would be quite intolerable if you suddenly cut the knot and secured a happy ending by having a fortune left to the hero from an unexpected quarter. On the other hand there is nothing against taking as your subject from the outset the adventures of a man who inherits an unexpected fortune. The unusual event is perfectly permissible if it is what you are really writing about: it is an artistic crime if you simply drag it in by the heels to get yourself out of a hole. The ghost story is a legitimate form of art; but you must not bring a ghost into an ordinary novel to get over a difficulty in the plot. Now there is no doubt that a great deal of the modern objection to miracles is based on the suspicion that they are marvels of the wrong sort; that a story of a certain kind (Nature) is arbitrarily interfered with, to get the characters out of a difficulty, by events that do not really belong to that kind of story. Some people probably think of the Resurrection as a desperate last moment expedient to save the Hero from a situation which had got out of the Author’s control.
"The reader may set his mind at rest. If I thought miracles were like that, I should not believe in them. If they have occurred, they have occurred because they are the very thing this universal story is about. They are not exceptions (however rarely they occur) not irrelevancies. They are precisely those chapters in this great story on which the plot turns. Death and Resurrection are what the story is about; and had we but eyes to see it, this has been hinted on every page, met us, in some disguise, at every turn, and even been muttered in conversations between such minor characters (if they are minor characters) as the vegetables. If you have hitherto disbelieved in miracles, it is worth pausing a moment to consider whether this is not chiefly because you thought you had discovered what the story was really about?—that atoms, and time and space and economics and politics were the main plot? And is it certain you were right? It is easy to make mistakes in such matters. A friend of mine wrote a play in which the main idea was that the hero had a pathological horror of trees and a mania for cutting them down. But naturally other things came in as well; there was some sort of love story mixed up with it. And the trees killed the man in the end. When my friend had written it, he sent it an older man to criticise. It came back with the comment, ‘Not bad. But I’d cut out those bits of padding about the trees’. To be sure, God might be expected to make a better story than my friend. But it is a very long story, with a complicated plot; and we are not, perhaps, very attentive readers."
>For some weird reason God was fine with Jesus converting his followers through his wizard powers but really prefers later humans to find faith without the benefit of being able to update on statistically significant miraculous results. Sounds fishy.
Miracle reports are quite common, even into the modern day. I'd say about half of the Christians I've asked have told me a story of something miraculous they experienced. 27% of Americans report that they've personally experienced a miraculous healing (https://www.barna.com/research/americans-believe-supernatural-healing/). Dr. Craig Keener wrote a two volume academic work on the subject, finding that miracle reports are historically common and still common today, with millions of people around the globe reporting that they've experienced a miracle.
Sometimes a lie reveals the truth. It's generally accepted that Jesus wasn't born in Bethlehem. It's only mentioned in two gospels, and the census story of moving back to your origins isn't Roman practice. It would be mayhem. People just didn't travel to ancestral homelands for a census. The killing of the innocents by Herod is also undocumented.
An invented messiah can just be born wherever you need him (and the messiah prophecy mentions Bethlehem), but clearly people were aware of where Jesus actually came from, so they had to admit to Nazareth.
Jesus is very well attested for a person of his period. The minimum viable Jesus is that he was a popular religious leader from about the class the Bible says he's from who lived roughly where the Bible says he did. That he had a large following and was believed to have magical powers and claimed to be the son of God. That he clashed with Jewish and Roman authorities. And that he was executed but his followers continued on.
If you want to say he didn't exist you basically believe in a conspiracy theory that later Christians went back and doctored a bunch of works and made a bunch of forgeries to provide evidence that he did. A lot of anti-Christians really want to believe this and produce a lot of shoddy scholarship about it. But in all likelihood Jesus was real.
I think my previous belief was that Christianity definitely existed as a religion by the mid-1st-century, lots of people knew the Apostles, the Apostles knew Jesus, and it would require a pretty coordinated conspiracy for the Apostles to all be lying.
Does the evidence from historians prove more than that? AFAIK none of the historians claim to have interviewed Jesus personally. So do we know that the historians didn't just find some Christians, interview them about the contents of their religion, and use the same chain of reasoning as above to assume that Jesus was a real person? Should we take the historians' claims as extra evidence beyond that provided by the religion itself?
Well, it proves that non-Christians living eighty years after the purported events wrote about the life and death of Jesus without expressing skepticism, which is something.
From the way Tacitus writes in 116, it seems like the general consensus among non-Christian Romans in the early second century was that Christus was a real dude who got crucified, and that there was a bunch of weird beliefs surrounding him. This belief was probably not filtered entirely through Christians, just as our ideas about the Roswell Incident of 1947 or L. Ron Hubbard are not entirely filtered through the people who believe weird things about them.
I believe what you're saying is: A large number of Christians all simultaneously, and within their own living memory, attested that Jesus existed. This is strong evidence because otherwise a large number of people would have to all get together, lie, and then die for that lie which seems less likely than being a real religious organization who met a real person. But the historians likely did not personally meet Jesus so they don't add additional proof.
From this point of view, the main things historians add is that it makes it even less likely to be a conspiracy. Because many of the historians are not Christians and drew from non-Christian (mostly Jewish or Roman) witnesses. We don't know who these witnesses were or if any of them directly met Jesus. But they are speaking about things going on in the right time and place to have met him and the Bible doesn't suggest Jesus isolated himself from foreigners.
So either none of them met him and it was all a conspiracy by Jesus's followers that took in a bunch of people who were highly familiar with the region. Or a number of non-Christians were in on the conspiracy.
My broader point is something like: we ought to have consistent evidentiary standards. If you want to take a maximally skeptical view then you can construct a case that, for example, Vercingetorix never existed. You can cast doubt on the existence of Julius Caesar if you stretch. If that's your general point of view then you can know very little about history. I disagree with that point of view but it's defensible. If, on the other hand, you think Vercingetorix existed or the Dazexiang uprising definitely happened but think Jesus might not have existed then I think you're likely ideologically invested in Jesus not existing.
To give an example where I don't think it's bias: most modern historians discount stories of magic powers or miracles regardless of who performed them. So the fact they discount Jesus's miracles seems consistent with that worldview rather than a double standard.
I take your point that lots of historical figures are not actually well documented. But even if there's limited evidence that Vercingetorix actually existed, there's also no real reason to suppose he didn't. Since there's so much superstition and so many false claims surrounding Jesus, my prior is that his existence is also likely mythological. I'd need stronger evidence to overcome that prior than I'd need to believe Vercingetorix existed.
There's nothing mythical about the story of the historical Jesus: it's a guy born of a woman who preaches for a while, comes into conflict with known authorities (Jewish or Roman), and is killed. The mythical stuff can be ignored; it appears in plenty of historical records of the era. Portents, gods, flaming warriors in the sky, and whatnot. And that's just Caesar. But Caesar still existed, right?
And unless you think St Paul didn't exist - which is a hard ask, since he writes letters and is written about in the Acts of the Apostles (about as good as historical records get) - then he is the likely and only candidate to have invented Jesus. Paul probably did popularise Christianity, but that isn't the same as inventing Jesus.
Do you think he would get away with making up Jesus and also saying he persecuted his followers who couldn’t have existed?
Besides that Acts has him in contact with the apostles who did meet Jesus, often in conflict with them.
Is Paul invented? Given that Tacitus puts Nero persecuting Christians in Rome by AD 64, whoever made all this up had to invent Jesus and Paul, and write the four gospels, the Acts, and all of Paul's letters long before that. It seems a bit fantastical; it takes a lot of faith and a lack of reason not to believe in the historical Jesus.
But you don't need to invent him. Erusian's minimal story is something I expect to have happened multiple times in Roman Judea (depending on how large a following we're talking about). Mythological additions to religious founders are normal. I've been an atheist basically since I had a real opinion on the topic, and I was genuinely surprised to learn that people argue this.
Yes. If you're an atheist it's entirely sufficient to say, "Jesus was a real person. However, he was not supernatural." Instead for some reason they want to assert Jesus was entirely mythological.
Someone later down made comments that reminded me that some figures from history were later believed to have been adaptations or syncretisms of earlier figures. So that's another possibility - Jesus was fictional, but melded from earlier people. I don't think this would adequately explain Tacitus' account, for example, but it could explain multiple people being "in on" the fabrication.
(Meanwhile, maybe some people aren't invested in Jesus' not existing, but rather invested in someone existing with a name as cool as "Vercingetorix". So the real solution should have been to introduce Jesus as, uh, "Yesutapadancia".)
Jesus is a bit similar to Ragnar Lodbrok in that he is attested but a lot of the records come shortly after his death. And there's a whole bunch of extremely historical people who the history books say were reacting to him and his death which are really hard to explain if he didn't exist or was a myth.
The people who think Ragnar was entirely fictional have to explain the extremely well attested historical invasions by his historically well attested sons who said they were avenging his death and who set up kingdoms and ethnicities which echo down to today. Likewise with Jesus, his disciples, and Christianity.
But there's just enough of a gap to say that maybe he didn't exist if you really, really want to. And there's a lot of space to say some of the stories were less than reliable and some of them might be borrowed from other people. Then again, that's true of most historical biographies.
We should take the historians' claims as evidence that the people whose job it is to professionally figure out what happened in the past all tend to agree that Jesus was real. And they're not just looking at the Bible when they do that!
Sources that indicate Jesus existed include the scriptures (the letters and gospels of the New Testament), but also include many of the apocryphal writings (which all agree that Jesus existed, even if they go on to make wildly different claims about him), the lack of any contemporary non-Christian sources that deny the existence of Jesus, the corroboration of many other historical facts in scripture about the whole Jesus story (like archeological findings corroborating that Pontius Pilate existed, or that Nazareth existed, etc).
You also have Josephus writing about Jesus in 94 AD, Tacitus writing about him in 115 (and confirming that he was the founder of a religious sect who was executed under Pontius Pilate), and a letter from a Stoic named Mara bar Serapion to his son, circa 73 AD, where he references the unjust execution of the "wise king" of the Jews.
Also, looking at scripture itself, there are all kinds of historical analyses you can apply to it to try to figure out how old it is, and whether the people who wrote it were actually familiar with the places they were writing about. For example, researchers recently did a statistical analysis of name frequency in the Gospels and the book of Acts, and found that it matches name frequencies found in Josephus's contemporary histories of the region, and that later apocryphal gospels have name frequencies that don't match, which makes it more likely that the Gospels were written close to the time period they are writing about (https://brill.com/view/journals/jshj/22/2/article-p184_005.xml). Neat stuff like that.
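(For a rough sense of what such an analysis involves, here's a toy sketch of a name-frequency comparison. This is not the method used in the Brill paper, and the name lists are invented placeholders rather than real data:)

```python
# A toy version of the name-frequency check: compare how often names
# occur in two corpora. These name lists are invented placeholders, not
# data extracted from the Gospels or Josephus.
from collections import Counter

def frequencies(names):
    counts = Counter(names)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

gospels_names = ["Simon", "Joseph", "Judas", "Simon", "Mary", "John", "Simon"]
josephus_names = ["Simon", "Simon", "Joseph", "John", "Judas", "Eleazar"]

p, q = frequencies(gospels_names), frequencies(josephus_names)

# Total variation distance: 0 means identical distributions, 1 disjoint.
tvd = 0.5 * sum(abs(p.get(n, 0) - q.get(n, 0)) for n in set(p) | set(q))
print(f"total variation distance: {tvd:.3f}")
# A corpus written far from the time and place of the events would be
# expected to show a larger distance than one written close to them.
```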
One major source, which is much disputed, is the Testimonium Flavianum which is the part of Josephus' writings which mentions Jesus. Josephus was a real person who is well-attested, so if he's writing about "there was this guy" it's important evidence, especially as he ties it to "James, the brother of Jesus" who was leader of the church in Jerusalem and mentions historic figures like the high priests at that time.
How much is real, how much has been interpolated over later centuries by Christian scribes, is where the arguing goes on - some say it's nearly all original, others (e.g. the Mythicists) say it's wholesale invention.
"My guest today is Dr Thomas C. Schmidt of Fairfield University. Tom has just published an interesting new book through Oxford University Press: Josephus and Jesus – New Evidence for the One Called Christ. In it he makes a detailed case for the authenticity of the Testimonium Flavianum; the much disputed passage about Jesus in Book 18 of Flavius Josephus’ Antiquities of the Jews. Not only does he argue that Josephus wrote about Jesus as this point in his book, but he also argues that the passage we have is substantially what Josephus wrote. This is a distinctive position among scholars, who usually argue that it has at least be significantly changed and added to, with a minority arguing for it being a wholesale interpolation. So I hope you enjoy my conversation with Tom Schmidt about his provocative new book."
The most surprising thing (for me) was to learn about Josephus' rather energetic life, and that Josephus knew people who were one or two degrees of separation from Jesus. It puts a new shine on the questions of the Testimonium's accuracy.
I mean, when the Mythicists claim Jesus never lived, are they also saying that his brother James (mentioned by Josephus and several other documents) was also a fabrication? Mary, Joseph, and Magdalene, all wholly fictional characters? Where does the myth-making and conspiracy start and end?
Why would Romans and Jews of the era readily agree to Jesus' existence? There were a number of mystery cults and sects, Jewish and otherwise, around the eastern Mediterranean at the time. Why go out of their way to claim the person at the center of this particular one existed if he didn't?
This isn't merely a single ethnic clan (the early Christians) circling around a myth. This is documentation from two groups who have no interest in spreading Christianity, a long history of bloodshed between each other, and one of those groups later persecuting Christians.
I think you're well overstating the minimum. Yeah, there was someone with that name around. There aren't any records of the trial though. (There's an explanation for the lack, but they're still missing.) And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries", though we don't know what the original records said, or even if they existed. Sometimes we have good evidence of their doctoring the records. Often enough to cast suspicion on many where we don't have evidence. Many were clearly written well after the date at which they were ostensibly written.
If you wanted to claim that he was a popular religious-political leader, I'd have no argument. There's a very strong probability that he was, even though most of the evidence has been destroyed. (Some of it explicitly by Roman Christians wiping out the Nazarenes.)
Yeah, the "hand waving" is a valid criticism. It's been decades since I took the arguments seriously, and I don't really remember the details. But when you say "the only possible case", I'm not encouraged to try to improve my argument. Your mind is already made up.
Would you be encouraged to try to improve your argument for the sake of an interested third party? In a public comment section like this you're never solely writing for the person you responded to, and I for one would indeed be quite intrigued to hear more specifics about your case, as I don't have any particularly strong opinions on the subject already.
The problem is that, IIRC, the evidence is ambiguous enough to allow nearly any conclusion that one desires. If one is biased in one direction or another, the evidence supports that direction, though few think that it's strong enough to constitute proof. And the name is not the thing. (There were lots of people named "Jesus", i.e. "Yeshua" in Israel.)
It's been decades since I took the argument seriously enough to look into it. I decided that there wasn't sufficient evidence to conclude that Jesus was any particular real person, but was more probably an amalgamation of several religio-political agitators. It would take a long time to reconstitute my reasoning in detail, and it was never strong enough to convince someone who held strong opposing beliefs. IIRC, I did decide that the main character in the composite figure was born around 33BC, but I don't remember why.
Additionally I've been part of small groups, and noticed the way they alter their oral history to add things that weren't there, and to remove things that were embarrassing. Sometimes the changes happen within a period of weeks, as enemies become allies. Admittedly the evidence I have for the alteration of written history is certainly later, but one might not only consider the "pieces of the true cross", but also the actions of the Council of Nicaea. And things like Epistles to the Corinthians are basically the same as propaganda, and should be considered equally reliable. Being old does not make something more trustworthy.
There are records that say he was executed by local authorities. The specific Biblical details are less well attested.
> And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries"
Every time I've pushed on these claims it comes down to the equivalent of not being able to prove a negative. It's clearly there in the versions we have, and they make some vague gestures about word choices to show it was inserted later. I'm not aware of a single smoking gun where someone admitted they doctored a record from the time.
I am especially suspicious of this because it's clear a lot of people WANT to believe they are later insertions for basically ideological reasons. But if you have an example that is either a smoking gun, like the evidence we have about the Austrian archduchy title or better, then I'd love to see it.
> There are records that say he was executed by local authorities
Isn't Josephus the first one to mention this? I don't think the Romans themselves left surviving records of an execution they would not have regarded as especially significant at the time.
Scott seems to favor DIY-compounding GLP-1 drugs from cheap raw materials online, but he leaves us without guidance as to next steps. So... what are the next steps?
(Context: In his post on the upcoming "Ozempocalypse" Scott says, *nod nod, wink wink*:
"Others are turning amateur chemist. You can order GLP-1 peptides from China for cheap. Once you have the peptide, all you have to do is put it in the right amount of bacteriostatic water. In theory this is no harder than any other mix-powder-with-water task. But this time if you do anything wrong, or are insufficiently clean, you can give yourself a horrible infection, or inactivate the drug, or accidentally take 100x too much of the drug and end up with negative weight and float up into the sky and be lost forever. ACX cannot in good conscience recommend this cheap, common, and awesome solution.
But overall, I think the past two years have been a fun experiment in semi-free-market medicine. I don’t mean the patent violations - it’s no surprise that you can sell drugs cheap if you violate the patent - I mean everything else. For the past three years, ~2 million people have taken complex peptides provided direct-to-consumer by a less-regulated supply chain, with barely a fig leaf of medical oversight, and it went great. There were no more side effects than any other medication. People who wanted to lose weight lost weight. And patients had a more convenient time than if they’d had to wait for the official supply chain to meet demand, get a real doctor, spend thousands of dollars on doctors’ visits, apply for insurance coverage, and go to a pharmacy every few weeks to pick up their next prescription. Now pharma companies have noticed and are working on patent-compliant versions of the same idea. Hopefully there will be more creative business models like this one in the future.")
Assuming a better cost-effective option hasn't emerged since he wrote that post, I am interested in trying out this route, which I think is clearly positive EV in my situation. The next step would be finding out where I can buy these peptides, and finding some non-astroturfed review forum where I can read which suppliers are the best-reputed and longest-established. Does anyone have any recommendations? I would be very grateful. I would also benefit from learning whether there's any method now available for the end user to test that these peptides are legit upon receipt.
Also plz feel free to give me any legal advice I might need so I don't get myself into trouble. I assume this is fully legal for the consumer, but even if not, law enforcement primarily targets the suppliers rather than the end users for this sort of thing, right? How likely is the DEA to show up to your doorstep ready to bag and tag some poor fat people?
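For what it's worth, the arithmetic side of Scott's "take 100x too much" worry is easy to lay out. A minimal sketch with placeholder numbers; nothing here is dosing guidance, and every value is invented:

```python
# The dilution arithmetic, with placeholder numbers -- not dosing
# guidance. The math is trivial; the hard parts (sterility, verifying
# the vial contains what the label claims) are exactly what it omits.
vial_mg = 5.0      # hypothetical peptide mass stated on the vial
water_ml = 2.0     # hypothetical volume of bacteriostatic water added
dose_mg = 0.25     # hypothetical target dose

concentration_mg_per_ml = vial_mg / water_ml
draw_ml = dose_mg / concentration_mg_per_ml
units_u100 = draw_ml * 100  # a U-100 insulin syringe marks 100 units per mL

print(f"{concentration_mg_per_ml:g} mg/mL -> draw {draw_ml:.2f} mL ({units_u100:.1f} units)")
# 2.5 mg/mL -> draw 0.10 mL (10.0 units). An error in any one input
# propagates linearly into the dose -- which is Scott's 100x failure mode.
```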
Paywalled instructions: https://www.cremieux.xyz/p/how-to-get-cheap-ozempic.
From my latest adventures in AI…
So, the idea here is to get an AI to build another AI. I give the first AI a theme, then it picks training texts relevant to the theme, grabs them from the Internet, then initiates a training run to fine-tune a second AI (possibly a retune of itself). So, off we go…
Umm. DeepSeek, why are you downloading that black magic stuff?
R1: “The PGM IV.296-466 love spell ("Philtrokatadesmos") offers a crucial dimension for Eros-AI by embodying Eros as an operational force—where desire is harnessed through ritual, materiality, and cosmic mechanics.”
(I’ll skip over some of R1’s answer here.)
R1: “Result: An Eros-AI that understands desire not only as transcendent (Rilke) or philosophical (Plato) but as a tactile, dangerous, and manipulable force—mirroring humanity’s darkest and most creative impulses.”
Umm … right. I am not entirely sure that’s what I wanted here, but …
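For anyone wondering what the setup looks like structurally, here is a minimal sketch with every external call stubbed out. None of these function names are real APIs; a working version would swap in an actual LLM client, a real downloader, and a real fine-tuning endpoint:

```python
# A sketch of the loop described above, with every external call stubbed
# out. These are placeholder functions, not real APIs.

def pick_source_urls(theme: str) -> list[str]:
    """Stub for asking the first model to choose training texts."""
    return [f"https://example.com/{theme}/corpus.txt"]  # placeholder URL

def fetch(url: str) -> str:
    """Stub for the download step; a real version would fetch the URL."""
    return f"(text retrieved from {url})"

def fine_tune(base_model: str, corpus: list[str]) -> str:
    """Stub for launching a training run; returns a new model name."""
    return f"{base_model}-ft-{len(corpus)}docs"

theme = "eros"
urls = pick_source_urls(theme)
corpus = [fetch(u) for u in urls]
# The step that bit me: "relevant to theme" is the model's judgment, so
# auditing the selected sources *before* training is worth automating too.
new_model = fine_tune("deepseek-r1", corpus)
print(new_model)
```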
Prediction that I want to register somewhere - I think it is unlikely (~30%) that Keir Starmer will carry through on his threat to recognise a Palestinian state in September.
Since about 1960, the life of ordinary people in developed countries has not become better. On the contrary, most people live too long, until they get Alzheimer's, Parkinson's, or some other neurodegenerative disease and suffer for years. In 1960, most people thankfully died before the age of 75. Today life is shitty, while in the 1950s it was much better. So why should society give money to universities, if science fails to improve people's lives?
Are you claiming that neurodegenerative diseases hit earlier than they used to, so that people are getting more years of life but fewer years of life without neurodegenerative diseases? Or are you claiming that people get more years of life *and* more years of life without neurodegenerative diseases, so that there is more time with every good, but that years with neurodegenerative disease are so bad that they more than make up for the longer healthy span?
The second one.
Up until around 1960, scientific advances had significantly improved people's lives. By 1960, people were living much better than in 1910 or 1860. However, if any of us were to return to 1960, we could enjoy a life that was as good as, if not better than, life today.
And yes, it is better to die at 72 than to live till 85 if you have had Parkinson's or Alzheimer's for the last 10 years.
Most people don't have Parkinson's or Alzheimer's at 75, or even 85. And if your plan is to avoid those things by dying at 25 because your sweet new ride didn't have crumple zones, airbags, or even seatbelts, have at it but leave the rest of us out of it.
You can perhaps make a case that *at present, at the margin*, new technology is generally making life a little bit worse every year, but extrapolating that backwards sixty-plus years across the board is simply absurd.
I cannot drive without a seatbelt, because the police enforce it nowadays, even if you drive alone. There are no cars without airbags etc. Big Brother watches me, and oppresses my freedom.
> the life of ordinary people in developed countries did not become better
You mean, from the perspective of getting Alzheimer's, or in general? What is your evidence?
No, what is your evidence that science is so good that we should spend a lot of money on it?
No, buddy, you started this, it's reasonable to ask for your evidence before laying out one's own.
Traditionally, the people who say things HAVE improved in some notable way are the ones compelled to provide evidence. But "sovereign is he who decides the null hypothesis," eh?
Multiple deals made by the Trump administration suggest that the general terms of trade with the United States will be a 15% import tariff imposed by the US government, and no countervailing tariff on imports imposed by other governments. There may be special carve-outs for sensitive or strategic sectors, but 15% this way and nothing the other will be the pattern for most goods.
Does anyone here run or work for a business that will have a huge wrench thrown in its works if this happens? What sort of problems are you expecting?
Sounds like an extra 15% tax on American citizens that most of them won't understand as a 15% tax, so they will probably blame someone else (immigrants?).
Yeah, it's those fucking immigrants. They squat under the checkout counters, hooking $5 bills with their filthy fingernails. 15% of what you hand to the cashier goes down their throats.
Every time Trump does something bad like imposing 15% tariffs on everyone or firing the BLS head because statistics make him look bad I think "damn, why did I vote for this asshole?" and then I see responses like this and remember why.
Can you explain how this makes you remember why? Do you get so mad at internet commenters that you are happier to vote for misery as long as it makes people angry?
I think a Harris administration would have felt and acted with the same contempt for me and anyone else who has concerns about immigration that is demonstrated by these internet commenters.
Look, I know my post was snarky, but I'd like to point out some assumptions you are making that are contributing to your being enraged by it. First of all, you are assuming it's aimed directly at you. You think I have contempt for you, and am sure you are so dumb that you fail to understand tariffs and would blame any price increases on immigrants. Actually I'm practically certain you are not dumb, and that you know what tariffs are and would not blame tariff-related price increases on immigrants. If you were dumb like that you wouldn't be reading the comments here, you'd be reading some rag. And you think I think a Harris administration would have been fine. I don't. I thought Harris was a weird cardboard twerp, and not too bright, and in fact I did not vote at all in the last election. As for a Harris administration having contempt for you and your concerns about immigration -- I dunno, maybe. But my impression of most politicians is that they do not spend much time thinking about what's fair, right or wrong, who is noble and who merits contempt. They either started out hollow or got hollowed out by the shit they had to do to rise in the system high enough to be a candidate, and now they mostly think about the stats and moving parts that have to do with getting and maintaining power -- polls and focus groups and editorials, and what kind and style of statement has what effect on the maintaining-power stats.
Do you think they would have implemented worse policies than Trump, or implemented less bad policies, but with more contempt?
Given the state of the US budget deficit and decades of failures to cut spending, a new tax that most people don't understand as a tax seems like a pretty good thing, unfortunately.
I’ve been buying high end medical equipment for my small practice directly from Beijing, honestly I’ll probably continue since the price is very good even with tariffs.
But I expect all the small ticket items like syringes and needles to go up in price as well since it’s a low margin business.
I have a friend, self-taught in ML, very smart and interesting guy. He wrote this paper as a Medium post:
https://medium.com/@extenebrislucet/split-the-difference-2a350c6fc714
He's now working on attention optimizations. He's already produced some impressive preliminary results compared to the standard implementation run under identical conditions, but is having more difficulty writing/optimizing his implementation as GPU code with only Claude for help. It's genuinely impressive how far he's gotten already with only Claude, but I think he could benefit a lot from reaching out to people in academia/industry with more expertise (he says they'll just ignore him until he has a lot more to show, or alternatively steal his ideas).
Does anyone in the field have advice for him, or would be interested in reaching out to him?
I read the abstract of his Medium piece. I do not have advice for him, but would be interested in talking with him because I like the way he thinks. I'm not in tech or industry though, or part of any network that would allow me to help him be better known. I'm a psychologist, by the way, and very interested in AI. If you think he might like to have a rambling talk with me about cognition, biology and AI, let me know how he and I could be in touch.
I am a bit pissed at Hanania's "manufacturing jobs fetish", because I think it is just a "not losing the next big war" fetish. Am I missing something?
It is not simple nostalgia, the way nostalgia for farming would have been in 1960. It is that you need both steel and farms to win a war. So it is different now.
"I think it is just "not losing the next big war fetish"."
I mean, that's also a pretty stupid fetish to have. And if one does have that fetish, conducting oneself the way the U.S. government has recently been conducting itself is idiocy so breathtakingly extreme that there are no words to describe it.
So first and foremost, "the next big war" is at an unknown date, against an unknown adversary, on unknown terms and *may not ever happen.* That doesn't mean that being ready in case war breaks out is a bad thing--of course one should be prepared. But in cases where the marginal unit of extra preparation trades off against the marginal unit of extra prevention, you really, REALLY want to go with the prevention[1]. Large-scale modern wars are *ruinously* expensive and destructive, even for the victors[2]: the only truly winning move is not to play.
Second, there is one resource that is far, far more valuable than any other when fighting war at that scale: that resource is called "allies." Having lots others on your side--even if they're not fighting directly--is enormously valuable. Just ask Ukraine (and then go ask Russia to get the other side of that).
So given all that, a foreign policy that manages to simultaneously piss off nearly every other country in the world--including staunch and longtime allies--is ruinously stupid. It both makes the next war more likely and makes the U.S. much less likely to win it. No gains in manufacturing base--and it's not yet clear that there will be much of any--are going to make up for the whole row of burning bridges.
Less compelling but still worth mentioning: the sort of jobs that the U.S. government wants to create are unlikely to be all that useful on this score anyway. The social root of the manufacturing jobs fetish is angry, angsty middle-class Americans who are pissed that the modern economy has left them behind[3]. The reason it's "manufacturing jobs" is because *historically* those are jobs that paid well despite requiring little formal education. But those are exactly the sort of jobs that are easy to ramp up in a time of crisis. The areas that would have wartime implications are those where you need to maintain a domestic pool of highly specialized knowledge and skills[4]. Knowledge and skills that your angry, angsty, middle-class American is unlikely to have and generally ill-suited to (not to mention uninterested in) acquiring.
[1] This is especially true when you're as ridiculously well-armed and well-resourced as the U.S. military already is.
[2] Or at least, one assumes they would be, based on our limited data. The last large-scale war that occurred anywhere on Earth ended 80 years ago. It's hard to imagine that present-day warfare at the same scale would be *less* destructive.
[3] Which is a reasonable thing to be pissed about, but the government they have now is definitely, definitely going to make it worse, not better.
[4] See for example, semiconductor fabrication.
I don't follow Hanania, so I don't know the post you're referring to. But if your goal is to secure wartime supply chains, then while you may want tariffs, you still wouldn't want to do tariffs the way that Trump is doing them. You would want to target them at specific industries and supply chains the military cares about, to maximize the benefit while minimizing the impact on consumers. And obviously, don't tariff allied countries, because they'll still be useful suppliers during a war and you don't want to piss off your allies.
Like, if you care about "not losing the next big war," then do things that will actually save you from losing the next big war, and not things that make Canada question if they should be on the same side as you during the next big war.
I mostly agree but allied production is not a good substitute for having a domestic manufacturing base. Relationships will change, they may be less defensible (e.g. U.S. depending on Taiwanese inputs during a war with China), etc.
Allied production is not a perfect substitute for a domestic manufacturing base. But it serves many of the same purposes, and you shouldn’t be aiming to *hurt* allied production if you think there is a reasonable chance that the next major war starts before alliances change.
Okay, but if you get to decide when the next major war starts (which you mostly CAN, since your involvement is what escalates minor conflicts), it's a sensible strategy.
A lot of those "minor conflicts" are someone trying to invade, destabilize, realign, or otherwise neutralize our allies or potential allies, including ones with substantial industrial bases that would be nice to have coupled into our economy during The Big One.
Throwing our allies to the wolves in hope that this will buy enough time for us to recreate the entire Free World industrial base within CONUS, seems like a poor strategy.
I believe the risk of those "substantial industrial bases" (those of Europe; I think Japan is still a worthwhile ally) being turned against the US is too high compared to the relatively minor benefit they'd have if they were allies.
How is it a sensible strategy to hurt allied production? Is the assumption that we can delay any major war from starting until all of our current allies have become enemies, so that by then it will have been a good idea to hurt their production? I would think a better strategy is to try to maintain allies, and avoid harming them significantly while moving production from enemies to allies or home.
No, your phrasing still makes it sound like those are events that simply happen, instead of the US actively choosing to trigger them. The assumption is that no major power is invading the US unprovoked, and you can simply bide your time, for example, if Russia invades Estonia, or China invades Taiwan, and then when strong enough, you can choose to turn your allies into enemies by conquering Canada and Greenland.
Of Scott's many writings, https://slatestarcodex.com/2019/10/16/is-enlightenment-compatible-with-sex-scandals/ was personally relevant to me. I used to practice Karma-Kagyu Buddhism for years, and I know perfectly well that most practitioners and most lineage-holders were monks living a very strict lifestyle. Presumably that helps with reaching enlightenment, but it is also possible that the kind of freedom from restrictions enlightenment gives you could lead to bad behaviour.
Yes, I know all about Chogyam Trungpa: how he, as a young monk (and tulku), was selected along with three others by an influential nun to be sent West, and how all four turned into some kind of weirdos. He really internalized the Vajrayana "no rules" thing, and was facing too many temptations. Mostly women and alcohol.
The only worldly lineage-holder was Marpa, and then Lama Ole Nydahl basically resurrected the idea. There were no big scandals about him, but let's say some consider him controversial: https://buddhism-controversy-blog.com/2014/06/30/propaganda-the-making-of-the-holy-lama-ole-nydahl/
I don't know, I knew Ole as a warm-hearted, helpful, kind person with really good knowledge. But we have to consider that his entire movement is based on his personal charisma. He is a manly, handsome ex-boxer with a good sense of humour. The teachers he selects tend to be attractive, and even the whole set of students leans towards attractiveness, which was for me a major selling point, so many hot women.
This is not a bad thing, but it is a little risky. It could mess with your mind. Imagine how bad pop music is sold by truly good-looking singers, and how bad movies are sold by good-looking actors and actresses; it is entirely possible that some kind of low-quality spirituality could be sold by attractive people.
Chogyam Trungpa was succeeded by an American-born man whose name I have forgotten who had sex with many sangha members, mostly males, without telling them he was HIV positive. He passed the disease on to several, and died of AIDS himself. I was involved with this Buddhist tradition in that era and can remember the Regent, as he was called, speaking at meditation retreats. He'd enter the room along with several other people he'd apparently been hanging out with til then. He always had the air of someone who had been doing coke in the back room. It wasn't that he seemed high, just perpetually sleazy.
There’s a thing that happens when charismatic people become spiritual leaders, and you see it in every faith tradition as far as I can tell, regardless of cultural origin, asceticism, celibacy, whatever.
Maybe it's a thing that was happening since ever. There are different rules for the charismatic spiritual leaders, and for the ordinary followers. You are not supposed to notice it, and definitely not supposed to talk about it publicly.
Maintaining such systems was much easier in the past. Most people didn't even get in contact with their religious guru, outside of a ceremony. When the guru abused you in private, you had no evidence. If you talked about it anyway, you could be easily silenced, and most people wouldn't believe you (and those who would, would prefer to stay quiet).
The reason religious gurus of all faiths seem so sex- and power-hungry recently is that now we can discuss the evidence without much fear of retaliation. (Even extremely vindictive sects, such as Scientology, can be defeated by the anonymity of the internet.)
I forgot the part where we actually have an example of really low-quality "spirituality" being sold by attractive people, namely Tom Cruise.
Or whatever the heck Gwyneth Paltrow is selling.
I thought Vajrayana has a long tradition of householder lamas. Dudjom Rinpoche, head of the entire Nyingma school, was a householder.
Agree on the attractiveness thing. The Sufis (I hope to be a Sufi) would say that being taken in by things like attractiveness or charisma means one is operating on quite a superficial level.
A long tradition, yes, but householder lamas are still relatively rare compared to monks.
Well... In case no one else has asked... when can we do a year long open threads experiment?
Also, the Commentariat article failed to mention the number of times our number one poster has been banned. Remember back in, what was it, 2017ish, when Deiseach was banned and a bunch of sad bois got together to beg for her back because the comments section wasn't the same? Good times.
Speaking of Reigns of Terror, I'd appreciate another one, if only for the spectacle of it.
I also think a Reign of Terror would be helpful.
Although I suppose a Chrome plugin that let you filter out by username could be useful, too
For the latter case, Substack has a block function that AFAICT hides comments from the blocked user.
No, it doesn't, I'm pretty sure. I hunted for quite a while, and even asked GPT. If you own a blog you can block or ban commenters on there, but you can't block commenters so that you don't see them if the comments are happening on somebody else's blog. I wish somebody would code a little dingus that makes it possible. Scott's pattern is to quickly ban people who put up bad comments about one of his posts in the first few hours, and to be fairly energetic in blocking similar bad posts for the rest of the day. After that he checks out. I have reported comments that were absolutely savage personal attacks on speakers -- not advancing the argument, not true, and sure as hell not kind. Some never got banned. Some got banned 3 mos later when Scott finally got around to dealing with the ferals. By that time, most had carried out a couple dozen more atrocities on here.
I tried just now and AFAICT it does indeed work. I've created an imgur album showing where you can find the block option, and a before/after of temporarily blocking Sebastian Garren (OP of this subthread), showing that his comments disappear when blocked: https://imgur.com/a/1VEsUT4
Hey, I believe you! But how did you do it?
As far as I can tell it’s native to Substack (unless you mean how to block people, in which case, click on their username, and on their profile the meatball menu next to the subscribe and message buttons should have a block option).
thanks, I'll look into this!
I thought Scott and Deiseach were friends? I just assumed it: Scott went to medical school in Ireland, and Deiseach is either the only or the most prominent Irish person here, so I assumed an IRL friendship.
Sure I mean there's only five and a half million people in Ireland, pretty sure they all know each other.
"when a friend makes a mistake, the mistake is still a mistake, but the friend is still a friend"
I don't think they have met IRL. Countries outside the US are not Dunbar-sized.
If true, definitely doesn't mean she hasn't been banned... at least twice. :)
On the information environment of Harry Potter and its contemporary relevance: https://open.substack.com/pub/jacobshapiro/p/wands-dont-win-warsinformation-does?utm_source=share&utm_medium=android&r=goony
On the back of news announcing that Harvard is planning to give the Trump admin some $500m in the hopes that this will result in the Trump admin laying off its attacks, I wanted to get some takes on 'universities' in this community. I suspect many here (lurkers and posters) are based in the Bay, which, due to Silicon Valley, is probably more hostile to the university system than the average city in the Northeast.
Biases up front: I'm generally not a defender of the university system, but I think what the government is trying to do here is probably the single most destructive thing that could occur during this admin, should they achieve their intended goals.
The short version of my position is something like:
- The modern university is the backbone of all research that happens in the country;
- Mostly that research ends up benefiting *private* industry, as professors spin off startups and researchers land valuable jobs (including in vc backed startups in the bay)
- That research then gets commercialized and scaled up, benefiting everyone
Longer version of my position is here (https://theahura.substack.com/p/silicon-valley-is-wrong-about-federal)
Examples include mRNA research --> Moderna, ARPA --> the entire internet, self-driving cars --> Waymo/Nuro/etc. But really you could point to any technological innovation in the last 100 years and find a direct line to some grant that put public funds in the hands of a researcher who made it happen.
Universities take in smart people from everywhere in the world, make them researchers, then make them billionaires. Everyone benefits from this, but America in particular ends up at the top of the world's research in many domains.
So why on earth are some of the loudest and most influential voices in Silicon Valley, people who depend on these researchers and on this pipeline, so giddy about the destruction of the modern university?
I guess the issue is that a university is treated as a single monolithic unit. Perhaps the Trump admin has nothing against STEM research; they would be crazy to. Perhaps they have a problem with anti-Israel protests or "Oppression Studies". The problem is, they are either incapable of or unwilling to target what they have a problem with specifically, so they are hitting the whole university.
Question: why does the same university that teaches queerfeminist interpretations of Shakespeare also have to teach and research STEM? Wouldn't it be better to split them into separate universities? I understand that some, indeed many, want STEMers to have some idea about the humanities, but still, in that case the STEM university might just require its students to also get 10 credits at the humanities university. Problem solved?
This way, if anyone wants to hit the humanities universities, at least the STEM ones would not get any fallout.
Many European universities are like that. I used to live next to the University of Veterinary Medicine in Vienna. They taught nothing but veterinary stuff. Why not? At least the government and everybody else will treat them according to how well they do that one thing and nothing else. The students did not seem very political; I mean, sure, young people have passionate opinions, but it was not a case of constant activism or demonstrations.
When making a specialized university, there is some disciplinary border where you make the cut. People working near that border will do better work in a university that includes researchers on all sides of them.
There are a few universities that are focused only on biomedical research (the Scripps Research Institute, Rockefeller University) and there are a few focused only on science and technology (Caltech, maybe Georgia Tech). But not many have chosen to do that.
This is just a guess, but perhaps in the past the "universal universities" made more sense, because their parts were interconnected. For example, if you were more natural science oriented or more religion oriented, it could simultaneously change your opinions on philosophy, chemistry, biology, etc.
This may be difficult to understand in our age, when natural science education is universal, and education is fragmented. But to give you a specific example, defending atomic theory meant contradicting Aristotle, and contradicting Aristotle could get you in trouble with the Church, which used Aristotle's teachings to "scientifically prove" transubstantiation. Therefore, being less religiously orthodox could indirectly make you more likely to support the atomic theory, even if there is nothing irreligious in atoms per se. So it made sense to support the entire "openness to potential heresy; also atoms" bundle, because the department of atomic theory could not survive alone in a religiously orthodox territory.
And the reason this stopped working, in my opinion, is that people (both students and teachers) in universities became extremely specialized, so... these days you could probably cut most universities into many pieces without losing much of the value, and they are already disconnected on the personal level, and only connected financially.
That said, it makes me wonder whether there are significant exceptions.
And if the financial ties were also cut, there would be a kind of market-testing. Some people complain about “$200K degrees in medieval poetry with zero job-market value”. I don’t know whether this complaint is valid; I would like to see how well they would do if they were not cross-financed from STEM, so to speak.
A counter-argument is that it is very cool if people study some things for their job-market value, and other things because they are interested in them. The most extreme example I know, although in Europe so it was “free”, is a guy who studied Egyptology and finance. Of course he works in finance. Another guy who studied the Sumerian language used to work as a martial arts trainer. He assumed he would never use that degree. Then America invaded Iraq, collected a lot of clay tablets, stored them at the University of Chicago, and called up every Sumerian language expert in the world to help translate them. This was not very publicized in the media, of course, as it was kinda theft, but yes, this guy was contacted through his professors to go translate them.
Again, specific examples would help me understand what you are pointing at.
You seem really resistant to being specific. If I can't guess the name of the person you are hinting at... does that make you feel smarter? Enjoy the feeling! I just don't think it is helpful for communicating clearly.
Do you have specific examples?
You said "STEM research", and then you gave examples about IQ research, and homosexuality research, which I believe do not belong to STEM.
(By the way, I find it possible that homosexuality works differently in women and in men, with women being more "flexible" depending on the environment. Of course that would not explain why the reported answers have changed for lesbians.)
Critics of universities tend to have a range of complaints.
Some of them complain that too much of the scholarly work done at universities is too useless. Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding.
Others dislike the use of college education to qualify people for white-collar professional jobs. The utility of the material taught varies widely between courses of study. Some people in engineering and accounting can point to things they learned in their courses that they use every day in their work. But for others, an undergraduate degree is just a very long and very expensive test of general intelligence and diligence, and they resent having been required to go through it to get to their actual professional work.
Finally, university faculties lean very far left, politically. This tends to alienate conservatives, who are unhappy to have to go through institutions where the staff believe they are at best wrong, and probably either soft in the head or morally deplorable.
> Some of them complain that too much of the scholarly work done at universities is too useless. Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are used mainly for academic CV padding.
I think this is untrue. Or at least, it is not empirically rigorous. Why is 'papers' the relevant measure, instead of, like, researchers? If a PhD at MIT publishes a computer vision paper that gets like 10 citations, and then goes to Google and builds Google Lens, I think it's silly to say 'well you could just get rid of MIT!'
I think this is the biggest sticking point I have on this issue -- people have this vague sense that universities are wasteful, or something, but then if you poke at that it's all coming from shitty biased news sources that purposely highlight the most egregious cases while conveniently ignoring all of the very vital stuff that keeps America on top.
Here's a new measure: sum up the salaries of every PhD that decided to go into industry. That is probably a better measure of the value of the university research system. Off the top of my head, starting salary at Google for a PhD researcher was like $350k.
"But for others, an undergraduate degree is just a very long and very expensive test of general intelligence and diligence, and they resent having been required to go through it to get to their actual professional work."
These Caplanite arguments completely miss the point that many countries do not ban IQ tests. Now diligence is a better argument, because conscientiousness is hard to test in a non-cheatable way.
Look, in my first job, after a few months, while the manager was on sick leave, I ended up writing an 80-page sales offer full of all kinds of technical specs myself. It was surprisingly similar to writing a college paper! So either they really taught me skills like that, or tested those skills. It goes way beyond IQ; I know high-IQ people who can't write for shit, seriously, they are super cow-rotators but, for example, lacking in vocabulary.
I have this feeling that because some people understate IQ, others overstate it, kind of implying that if you have IQ, learning is not important. This is not so. You need good genetics to run fast, but you also certainly need a lot of training. It is like that. These are truly two extremes: one is like saying a good trainer can turn anyone with cerebral palsy into a good runner, the other says that if you have good genetics, you do not need a running trainer.
> These Caplanite arguments completely miss the point that many countries do not ban IQ tests.
That is a great argument! Surprisingly absent in many debates.
> So either they really taught me skills like that, or tested those skills.
Well, that's the problem. Caplan is on the side of "20% taught, 80% tested". But most people don't even think about this, and assume that it is "100% taught".
At least in my offline bubble, most people unthinkingly treat "intelligence" and "education" as synonyms. It makes perfect sense if you are a blank-slatist, and it's a perspective that most schools are happy to promote, because for private schools that's basically their sales pitch, and even for public schools, it's something that is supposed to give their teachers social status.
(This is a reason why I strongly support separating teaching from testing. If people have great writing skills before school teaches them, there should be a way to prove it. Not just for them, but for our general understanding of how education works. Well, exceptional people already have ways to prove it, for example by winning a writing competition, but that is not systematic, and excludes those who are above average but not exceptional; so you can't figure out the exact proportion of "how much school teaches good writing".)
> I have this feeling that because some people understate IQ, others overstate it, kind of implying that if you have IQ, learning is not important.
Even worse, for every person who says "I have IQ, so I don't have to learn stuff", there is another person who points at him (it's usually a "him") and says "here is a textbook example why IQ doesn't mean anything".
Mensa did a disservice to all intelligent people by picking the most dysfunctional people with high IQ, and associating the idea of high IQ with them. You should either test the entire population (like the Americans do with the SAT) or not at all; only testing the people who otherwise fail at life is the worst option.
> You need good genetics to run fast, but you also certainly need a lot of training.
My naive 18-year-old self expected that Mensa would be the place that provides the training. (I mean, in a world with limited resources, it makes sense to provide the training to those who have the genetics.) Obviously, I was disappointed.
https://www.cremieux.xyz/p/mensa-the-above-average-iq-society
wait, why are you saying mensa is testing people who otherwise fail at life? for us, they simply held a workshop in our high school, and offered a test to everybody. but yes, I could say that those who took it looked kinda unpopular losers, because they were sort of desperate about gaining some status in someone’s eyes, and everybody else just did not care about such a piece of paper.
this is noticeable on the Internet, too; it is typically otherwise-losers who talk a lot about IQ. For example, James Woods is said to test at 190, but he is simply not interested in it; he is interested in making films and conservative politics. notice how much of his output just does not even sound very smart, simply because that kind of thing is not so g-loaded.
but it is not entirely so, they told us about Special Interest Groups. now I think an entirely non-loser could also think that if a high-IQ SIG coincides with your chosen career, that is not a bad thing, it can get useful in all kinds of ways.
note that the mensa presenters themselves did not look like losers. they had a great sense of humor and could talk engagingly. obviously they specifically selected the coolest members to be presenters, so it says nothing about the majority.
It is possible that there are significant differences between Mensa chapters in various countries, and that my description is too specific for Slovakia. I don't think that my country is too unusual, because when I read some experiences from other countries on internet, they often sound similar. But maybe it works better where you are.
> for us, they simply held a workshop in our high school, and offered a test to everybody.
Okay, that is a way more active approach than here.
> they were sort of desperate about gaining some status in someone’s eyes
Yeah, "status" feels like the right word. Mensa gives you status for... basically, the way you were born.
That is not completely unprecedented. For example, attractive people also get status for the way nature made them. Also, success in any human endeavor requires a component of genetic luck: you can't be great at sport without having a superior body, great at science without having a superior mind, etc.
But most of these things have a component of luck and a component of work. Sport requires the lucky body *and* lots of hard work. Science requires the lucky mind *and* lots of hard work. Even the pretty girls take some care with makeup, dressing nice, and not getting fat. Mensa requires the lucky mind and... zero work.
So it is naturally attractive to people who want to do zero work and get respected anyway. Which is quite tragic, because high IQ gives you a *multiplier* on things you do, so you could get actual respect for actual achievements while spending less effort than your less gifted neighbors... but many people seem not to care about this.
> they told us about Special Interest Groups
Once I had hopes of changing Mensa from the inside, by starting a "rationality SIG" or something like that. Anyone on Less Wrong or ACX could easily pass the Mensa tests (just my estimate, but when I compare the ACX discussions with Mensa meetups, most Mensa members seem almost retarded), so we could coordinate to become regular members, start a new SIG, and then promoting rationality / education / science / actually using your brain for some meaningful purpose would become an official Mensa activity.
I don't think we could change the existing Mensa members, but perhaps we could have an impact on some new ones. (When Mensa does testing, I see a few interesting people at the following meetups. Most of them never come back once they see what Mensa is actually about.) But I can't do it alone, because it would seem like "one weird grumpy old guy's effort". With three or four of us, though, we could simply have a mini ACX meetup, and say to the newcomers "of course we are a bit different from the rest of Mensa, that's what makes us a cool SIG". Unfortunately, not enough rationalists around here.
There are a few SIGs here, but I don't think any of them actually does any activities, except for a "tourism SIG" (which is cool, but not really something I need Mensa for). Most of them are just something like "a list of Mensa members who are kinda interested in e.g. astronomy, but don't do anything special about it".
Did the first three hits help?
If not, https://en.wikipedia.org/wiki/Bryan_Caplan#The_Case_Against_Education
>Some of them complain that too much of the scholarly work done at universities is too useless.
That seems to be a prime example of "throwing the baby out with the bath water". What other mechanism is proposed to generate the useful stuff? I believe there is an argument to be made that the useless stuff is inevitable; it's called research precisely because you don't know what you'll be getting in the end, let alone if it's going to be useful, but it's clear that occasionally there's a diamond among all the dirt. If you stop digging altogether, no more diamonds for anyone.
I think the complaint about useless work isn't referring to "useless" in the sense your comment describes. Your sense is apparently research where, to find the Really Valuable Thing, we have a million places to look, and so on average we get research saying half a million places did not contain the Really Valuable Thing and are thus considered useless. I've heard this for decades, implied in "ninety percent of all research is useless". But most people understand that there exist tough problems that require looking in a lot of places, some of which will be wasted - but you can't know that until you look.
The complaint's sense is more like "any midwit can tell this RVT will not be anywhere in these million places, so ninety percent is actually a hundred and I could halt the program, lose nothing, and save millions", plus "any midwit can tell that the RVT you're looking for is only valuable for justifying a wealth transfer, without creating any wealth, and the premises are subjective and therefore the whole thing is useless", with a side order of "this one field over here is based on so many false premises that it would have been shut down if not for a few ideologues defending it".
The latter sense of "useless" is debatable, but at any rate, the former sense is not really being challenged this round.
>Your sense is apparently research where, to find the Really Valuable Thing, we have a million places to look, and so on average we get research saying half a million places did not contain the Really Valuable Thing and are thus considered useless
That is the sense in which the commenter I replied to meant it too, apparently:
>Some of them complain that too much of the scholarly work done at universities is too useless.
Ogre did not talk about 100% useless stuff, but some number smaller than 100%.
Regarding 100% useless stuff: Even assuming for the sake of argument that it exists and "any midwit" could reliably detect it, it still can't be more than a rounding error in the grand scheme of things. Like, what are we even talking about? A few post-grads musing about gender studies, requiring pencil and paper but not erasers, that kinda thing? Throwing whole universities under the bus because of that is such short-sighted overkill that ulterior motives having nothing to do with saving money start to look more likely.
The commenter you're quoting is Johan Larson, not Ogre. And Johan supported that claim as follows:
"Very little academic research ever produces anything that can be made use of outside the academy. Most academic papers receive few citations, and are mostly used mainly for academic CV padding."
I think this is more consistent with "any midwit can tell this RVT will not be anywhere in these million places" than it is with "to find this RVT, we have to search these million places, even if most of them won't have it".
And for the former, the central example seems to be research that looks like complete stabs in the dark, like how much pressure penguins build up while pooping, or whether people dress appropriately for colder weather (to cite two examples I pulled up at random).
As you say, such studies might be a rounding error. OTOH, it seems not at all hard to refuse funding them, given that there's presumably a formal application process, so a discovery that such studies even exist is a strong signal that someone isn't doing their job, which (1) raises questions about how much funding that person has funneled to useless research that we haven't found out yet and (2) supports finding an alternative process that removes that point of failure.
> OTOH, it seems not at all hard to refuse funding them
I think this take misunderstands how grant writing works. For the most part grants are earmarked. If some lab is studying 'how much pressure penguins build up while pooping', it's because someone somewhere is funding the research through a grant. In point of fact, it's most likely the US government that is funding that research, since the US government funds ~60% of all research done in US universities and is the single largest funder of research by an extremely large margin.
It's not like the *university* is setting out the research targets. There's no admin that's like "TODAY, PENGUIN POOP!"
Even beyond the lack of understanding of the grant process, I think this also is just like a massive Chinese Robber fallacy. There are something like 500000 papers published in the US each year. Even if you found 500 papers that you thought were *really* egregious, you would have no point at all.
Which brings me back to the original point: is all of the animus just fueled by reasoning like this? Just people who have no idea what's going on, and are therefore willing to foot-gun themselves?
I have a thought here. Scholarly work at universities counts as an apprenticeship if your plan is to do scholarly work. I mean, if you really want to be an academic historian or academic biology researcher, practically all of it is ideal. The problem is that people want to work in business, and yet the university is an apprenticeship in scholarship, not an apprenticeship in business. Could we fix that? If businesses are not offering apprenticeships for various reasons, could much of the university be turned into business simulations?
Only doctoral programs are really apprenticeships. Undergraduate and masters education is still mostly coursework.
My own position is that the existence of "any college degree" as a significant qualification is an opportunity. Employers are eager to identify capable entry-level employees. These people don't need to actually know anything specific, but they need to be literate and numerate to a high standard, diligent and reliable. The best available tool for identifying such people right now is the undergraduate degree, which is why these "any college degree" jobs exist.
The problem is that the undergraduate degree takes four years and can easily cost six figures, particularly once living costs are included. I would like to find something as good at indicating general ability, but cheaper and faster.
My proposal for doing so is in two parts. First, introduce a track or course sequence in high school that is demanding enough that completing it is actually impressive. Call it something like the First Class High School Diploma, and gear it high enough that only 10 or so percent of graduates are able to get one. And have common testing, not done by the teachers, to ensure uniform standards of grading. I suspect some employers would be eager to hire these people right out of high school, if only they had a reliable indication of quality.
Second, as an alternative to an undergraduate degree, introduce funded white-collar apprenticeships. These might include some amount of technical or business coursework, but would have participants start working with employers sooner, and doing useful work sooner. I expect this sort of program would be of interest to some of the more capable graduates who are more business-minded and less intellectual. (Note that I said "less intellectual", not "less smart". Not everyone who is sharp is interested in the sort of knowledge-for-the-sake-of-knowledge study that undergraduate degrees make so much of.)
It is my understanding that something like this is already being done in Switzerland. There, a far smaller portion of graduates go to university, and these white-collar apprenticeships are the standard way into white-collar jobs that don't require formal degrees.
Do DARPA funded projects have a higher success rate? I thought they were also very much in the venture capital mold of funding dozens of things in order to get a few big hits.
18% of DARPA funding goes to universities! DARPA sends more money to universities than it does to federal research labs, non-profits, and foreign entities of any kind *combined*. Universities are the second biggest recipient of DARPA funding behind industry, which is a whopping 62% of the DARPA budget.
Except... a lot of the people who are getting DARPA funding in industry are also PhDs / postdocs / professors who were previously trained in universities and dependent on other kinds of US funding. So all those industry labs also depend on universities!
The same percentages are roughly true across DoD, which funnels roughly 20% of its budget to universities
Realizing I have no idea what point you're trying to argue
So? Different goals, different requirements. Yes, they're doing research and have invented important stuff that turned out to have civilian applications in addition to the primary military ones, but it's still more limited and goal-oriented than a general university. DARPA doesn't seem like the kind of institution where you can research the "unknown unknowns", or just do the boring but still necessary tasks like collecting and analyzing economic/political/ethnographic data and so on. That is the kind of institution that's under assault without a clearly better replacement in sight.
DARPA is a funding mechanism, sometimes it funds research in Federal labs and sometimes in research universities. I guess you could simultaneously stand up huge Federal labs while starving the research institutions in the hope everything will work out, but it seems like a lousy idea.
Can anyone provide a counter example? A state which has abolished research universities but continued to have significant research?
Bryan Caplan wrote a post recently called “let them be Hillsdale” arguing that it would be better to destroy institutions or put universities under strict ideological policing to force them to stop being woke, but Hillsdale isn’t a research powerhouse and I honestly don’t understand his argument.
> That doesn't mean we need to use universities to do it
Our existing research pipelines are empirically extremely strong. I guess you're right that we don't *need* to use universities, but Chesterton has a pretty big fence. Destroying our research pipelines and figuring it out later seems really dumb to me.
I think there's also likely good reasons that the universities do this work. We want smart people to do research. Smart people congregate in universities. Therefore we should fund universities to do research.
(Also the mRNA research was mostly NIH; DARPA came in later to fund Moderna specifically. There are a ton of other examples: a lot of drug discovery comes out of NIH, and the Dept of Energy also makes grants. But, yes, a lot of funding comes out of defense, a solid chunk of which also goes to universities)
Weird Tales was a pulp magazine that in its day published fantasy and science fiction stories by writers who would later be famous, such as H.P. Lovecraft (Cthulhu) and Robert E. Howard (Conan). It began publication in 1923.
If we went back one hundred years to 1925 and submitted a story to Weird Tales magazine that accurately described our world of 2025, what part of the setting would the editor consider most unbelievable?
Being able to reference all of human knowledge, at any time, on a device everyone has with them does not make people behave more rationally.
The whole rise of computers, until there are computers in all sorts of unlikely devices, might seem strange.
"In John's kitchen, there were a dozen appliances powered by electricity so cheap that even when they were not used, each of them spent at least some power displaying the exact time. Well, not exactly the same time... the clock on the fridge currently showed 14:02, but the clock on the freezer disagreed and showed 14:06. The clocks on the washing machine, dish-washing machine, stove, and the oven showed 14:04, 14:07, 14:01, and 14:05 respectively. There were many smaller clocks that John didn't bother to check. He knew it was a little past 2PM, and for this moment that knowledge sufficed. He expected the next generation of the appliances to connect across the world using satellites orbiting so high in the sky as to be invisible, to coordinate on the exact time. The power necessary to achieve this noble purpose would not be a concern for the average person."
Actually, computers full stop, if we are talking 1925. It would be interesting to see what the science fiction stories of the day said about calculating machines, and how far ahead of the curve authors were about future computers. I never read Lensman; I wonder if it had computers.
From what I remember, the much later Foundation had gargantuan room-sized computing machines.
EDIT: Wow, Lensman is much later than I thought, never mind that point.
In 1925 no one had even made the conceptual leap to understand that there could be an idea of general-purpose computing. That comes in 1936.
I think the state of the art in data processing at the time was the use of punched cards, processed using elaborate electromechanical equipment. There was also some use of analog computing devices, for calculating things like ranges for naval gunnery.
Not sure if I’m using this right, but I remember Scott’s Sadly Porn review describing therapy lines as “koans” - not meant to be true or false, just something you sort of live with until it breaks you open. Ended up writing this essay about cringe business clichés and how they might function like that - non-propositional, sincerity-by-force, short-circuiting irony.
Does that track with how koans work? Or am I stretching it too far? Would love thoughts.
https://substack.com/@waterofmarch/p-169698769
The LLM weirdness I’m currently working on …
In back translation, you take a piece of text from your training corpus and get an LLM to come up with a question to which the training text is the answer. It helps if the system prompt instructing the LLM to do back translation is in the same language as the training corpus. Fine. Except that when the training corpus is in Ancient Greek, there’s not really a suitable Ancient Greek word to use for an LLM in the system prompt. Some discussion with DeepSeek R1 ensued, about the tripods of Hephaestus in book 18 of the Iliad, and the statues of Daedalus in Aristotle’s _Politics_ and Plato’s _Meno_. Fine. I can coin a neologism whose meaning an LLM will grasp, even if Aristotle would have been deeply confused by it. Ψυχὴ Δαιδαλική (roughly, “Daedalic Soul”) it is.
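For anyone curious, the loop itself is tiny. Here's a minimal sketch, assuming an OpenAI-style chat-completions API; the model name, prompts, and function name are placeholders of mine, not anything from a real pipeline:

# Back translation: generate a question whose answer is a given corpus passage.
# Minimal sketch; assumes the `openai` Python package (>= 1.0) and an API key
# in the environment. Model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# In practice this prompt would itself be written in the corpus language;
# for Ancient Greek, the model gets addressed by the coined name above.
SYSTEM_PROMPT = (
    "Given a passage, write exactly one question to which the passage "
    "is the answer. Reply with the question only."
)

def back_translate(passage: str, model: str = "gpt-4o-mini") -> str:
    """Return a synthetic question whose answer is `passage`."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content.strip()

# Each (question, passage) pair then becomes a synthetic training example.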
>In back translation, you take a piece of text from your training corpus and get an LLM to come up with a question to which the training text is the answer.
Reminds me of the "Fortune Presents Questions for Famous Answers" from the Unix fortune cookie command line tool. My favorites were:
Answer: Go west, young man.
Question: What do wabbits do when they get tired of wunning around?
---
Answer: Dr. Livingston I Presume?
Question: What is Dr. Presume's full name?
---
Answer: The Royal Canadian Mounted Police
Question: What is the greatest achievement in the history of taxidermy?
Reminds me of Carnac the Magnificent. https://www.youtube.com/watch?v=lRTtLvKAKgk
P.S. R1 objects to being referred to in Ancient Greek in a way that would imply it’s a slave.
Interesting choice.
Yes, I know, in _Meno_, Socrates and Meno get a slave to verify a mathematical proof, and that bit in Aristotle’s _Politics_ is about how they could abolish slavery if they could somehow automate a weaving loom….
How reliable do people find Wikipedia, specifically in terms of political bias?
I saw a recent complaint about the page on Mao not being critical enough. But Marxists also complain a lot about bias on Wikipedia from the other side, presumably both complaints can't be true. A lot of commenters said they didn't have much trust in Wikipedia in general for anything relating to politics.
Personally it seems like it stays factual and impartial for the most part, and I have used it a fair amount. I thought the Mao article was fine.
I'm asking about its reliability on social issues in general, not specifically on Maoism.
Surely both complaints can be true. There's no contradiction between "Wikipedia is biased against me, a Marxist who thinks the kulaks deserved what they got" and "Wikipedia is a lot softer on communism than it is on fascism".
Having tried to be rigorously factual in an edit of a famous true-crime case has made me very leery of Wikipedia for controversial topics. You are essentially powerless in the face of editors with a history of enough edits. Appeals to arbitration by disinterested third parties may get no response. The result will be whatever the consensus was before reform efforts.
Mostly reliable on uncontroversial issues, e.g. technology or obscure pop culture stuff, but check the references if it's at all important.
Quite unreliable on anything politically controversial, unless you also check the talk pages (and there may be many of those) to see what's been excluded. Even if the facts alleged in the article are all true (and don't count on even that), you have to assume they've been cherrypicked to support a particular narrative that may or may not be aligned with truth.
Yep, the talk pages are often the place that keeps the records of what was edited out of the article. Therefore, if I see a biased article, I think the best way to fix it would be to write a concise explanation on the talk page. Because if you fix the article, some mighty editor could revert it with a single click. But if you explain the issue at the talk page, future editors will see it, and some impartial experienced editor may volunteer to fix the page and win the edit wars. Also, according to the Wikipedia rules, you have the excuse of "I may have a conflict of interest, so I didn't want to edit the article directly", which can make you sympathetic to the other editors.
Very reliable, with enormous caveats.
Wikipedia relatively seldom gets facts wrong, but often selects and frames facts in a highly biased way.
You can usually trust it for narrow factual claims, except on the most politically-charged issues, but not for opinions, explanations or judgements.
That's my understanding as well. I will just point out Scott's extremely relevant exploration of the news media's use of the same tactics: https://www.astralcodexten.com/p/the-media-very-rarely-lies
Wikipedia has been the internet for me, more than anything else. And I liked being able to donate a small sum from time to time: the ask was so small compared to how much I looked things up on it.
So I was mainly disappointed to learn that they didn’t actually need my five or ten bucks for Wikipedia, but rather for causes dear to the hearts of the people who founded Wikipedia, I guess.
I see no link there. I guess ultimately if they can’t fund their causes, they might take Wikipedia away. I’ll go back to the Britannica.
Ehhh....that specific criticism seems to be more heat than light.
The Wikipedia Foundation being a registered nonprofit, its financials are public record. For 2024 a bit less than 60 percent of grants made by it went directly to Wikipedia websites (the various different language versions of Wikipedia), for "ongoing engineering improvements, product development, design and research, and legal support." The other annual grant dollars go to grants for "the Wikipedia communities", supporting "projects, trainings, tools to augment contributor capacity, and support for the legal defense of editors."
The Wikipedia Foundation though is not just a grantmaker, it is primarily the entity that pays the salaries and other expenses of all the Wikipedias. Only about 15 percent of its annual expenses are the grants it makes, and as noted above 60 percent of the grants are directly to the various Wikipedias. So even if you view all of the remaining 40 percent of grants as for "causes dear to the hearts of the people who founded Wikipedia", that amounts to only around 6 percent of total annual outflows.
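(Spelling out that arithmetic: grants are about 15 percent of total outflows, and the non-Wikipedia-website grants are at most 40 percent of the grants, so 0.15 × 0.40 = 0.06, i.e. roughly 6 percent of total annual outflows.)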
Wikipedia's volunteer editors have beefs about the Wikipedia Foundation, which you can read about here:
https://slate.com/technology/2022/12/wikipedia-wikimedia-foundation-donate.html
That doesn't seem to be about any Wikipedia donation dollars going to the founders' pet causes though, and in the audited financials I don't see any evidence of that.
Starting in 2016 the Wikipedia Foundation made a strategic decision to start building a separate in-perpetuity endowment rather than just rely on annual contributions forever. That endowment, also registered and governed as a US 501(c)(3) not for profit, had as of last year grown to $144 million. Like all permanent endowments it is funded by donors who explicitly restrict their donation dollars to that fund (that being the only way that an endowment fund can be genuinely permanent), and like all such endowments it builds itself almost entirely with relatively large gifts as well as bequests in wills. Point being that if you are a regular recurring individual donor to Wikipedia none of your dollars are ever going to the Wikipedia Endowment. (You could choose to donate to the Wikipedia Endowment in any amount but you'd have to have explicitly chosen to do so.)
The Wikipedia Endowment began making grants in 2023 and you can read that list here:
https://diff.wikimedia.org/2023/04/13/launching-the-first-grants-from-the-wikimedia-endowment-to-support-technical-innovation-in-wikimedia-projects/
The strong fact-based criticism is that the Wikipedia Foundation now fundraises much more than is actually needed to operate Wikipedia (the various different-language wikipedias). That is true. Wikipedia's leadership, i.e. the somewhat-overlapping governing boards of the Wikipedia Foundation and of the Wikipedia Endowment, don't deny this. They say basically that they are raising money to make the continued healthy existence of Wikipedia stronger than any given year's fundraising. That means both investing grant dollars into design and research and one-off improvements, and building a "corpus" (endowment) such that Wikipedia at some point is fully independently "endowed".
My personal read of the financials would be that they are already at or pretty close to that second goal, and were I on one of those boards I'd be asking when the fundraising effort declares victory and leaves the field. That's just one outside view though, YMMV, etc.
At a minimum they need to be, and from some things I've read are to some degree, listening to the pushback that their fundraising appeals need to stop giving people the impression that Wikipedia is on a knife's edge financially. That is definitely no longer true precisely because they appear to have done a strong job of making Wikipedia financially independent. As a user who's very glad that Wikipedia exists I am glad that they have achieved that objective and that Wikipedia therefore isn't at risk of becoming dependent on public funding or on any single major private donor, etc.
You're calling it the "Wikipedia Foundation", but it's Wikimedia. I think this might be more than just a nitpick if it's masking from you the true sprawling nature of what they do. I notice you said "went directly to Wikipedia websites (the various different language versions of Wikipedia)"; I think that should be "Wikimedia websites" and "(all sorts of things that are not Wikipedia)" respectively. For instance, are you aware of https://www.wikifunctions.org/wiki/Wikifunctions:Main_Page ? It got millions in funding, and has always sounded to me like a huge boondoggle.
Separately, their fundraising misbehavior is much worse than just making the situation sound more dire than it is. They steadily ramped the urgency of the messaging up over several years as the actual financial need was ramping down. There is no reasonable interpretation other than empire-building cash grab.
The wikipedia/wikimedia thing is just a repeated typo on my part.
"There is no reasonable interpretation other than empire-building cash grab." You are clearly unfamiliar with the social dynamics of non-profit fundraising....no bad faith is required in order to explain the behavior which I and then you each summarized.
As for the Wikifunctions initiative, I've heard of it. Have not interacted with it and don't know much more beyond the basic idea. It is an example of something which has received funding from the Wikimedia Endowment as distinct from the Wikimedia Foundation's annual fundraising.
That makes sense since it is a new idea, a startup project (launched at the end of 2023). General theory/practice in the world of professional NGO management is for endowment funds, and not annual fundraising for operations, to fund new initiatives from which the ultimate payoff (the NGO varieties of that word, so usefulness, impact, influence, etc) can't be specifically known yet. I.e. endowment funds as sort of the NGO version of venture capital (an analogy I've heard made in my professional contexts many times over the years).
In that spirit it would be a head-scratcher to conclude that a 20-month old initiative is already proven to be a "boondoggle". It may turn out to be of course, just as a large fraction of actual venture-capital investments end up failing. That risk goes with the territory.
>The wikipedia/wikimedia thing is just a repeated typo on my part.
Once is happenstance, twice is coincidence; the tenth time, you wrote multiple paragraphs positioning yourself as knowledgeable when you don't even know the name of the organization in question.
>You are clearly unfamiliar with the social dynamics of non-profit fundraising....no bad faith is required in order to explain the behavior
I understand that it's embarrassing to overreach and get exposed, and firing back with some heavy-duty condescension in that case might feel pretty good, but it is not healthy or effective rhetoric. And, are you suggesting that it's normal and good for a non-profit to eternally seek to control ever-larger resource pools, even after it has more than enough to fund its stated mission forever?
"Endowment vs fundraising" does not matter. That's a question of internal accounting structure. The reality that is relevant to the rest of the world is that they're an organization with income, expenses, and a cash reserve. The point is they've been instilling a steadily increasing fear in people that they're having trouble meeting the expenses of their core function, so much so that the service is on the brink of going down. Obviously the endowment capital would be in play if that was the real situation, so it's not valid to say "well the endowment isn't expected to fund core operations, so it doesn't matter what they do with it". (Actually, even if it was valid, it's still a problem, because the vast majority of donors are intending to fund Wikipedia itself and nothing else! It's not great that they're wasteful, but the real sin is the dishonesty).
I promise you, anyone who has been consistently exposed to the past decade+ of their fundraising, and knows they were actually financially fine, is shaking their head in disbelief that you're defending them like this - doubly so if they got conned into donating.
I read a Twitter thread about this subject years ago, but I am not a member of Twitter so I can’t read it anymore. I recall a discussion that followed suggesting that perhaps half of your donations went to something other than running the website. Even if it were a much smaller percentage than that, I would still feel like I was being manipulated. If I donate to the National Wildlife Federation, I don’t mind what they spend on overhead. I understand that’s part of running a nonprofit, but I would not be happy if I learned that they turned around and donated my donation to the ACLU.
We obviously see this differently, and I’m definitely a free rider on Wikipedia on your dime now. Or on two or three pennies of your dime.
https://xcancel.com/echetus/status/1579776106034757633#m
Thank you. I feel less crazy.
I also thought I'd heard of the Wikimedia Foundation making sizeable grants to things unrelated to Wikipedia. But going through a couple years' worth of audited financials and their annual reports (which list major grants) didn't turn up any such examples.
It remains possible that the Wikimedia Foundation did some of that prior to 2022, which is how far back I went. The Wikimedia Endowment's grantmaking, which began in 2023, is restricted to "support[ing] the technical innovation of Wikipedia and the Wikimedia projects and their growth for the future".
I think most pages are shaped by people who are very invested in that particular subject. Woodgrains thinks there’s a pro-Mao conspiracy but there’s really a million pro-this-subject conspiracies out there.
Also, what IS the proper degree of criticism to express towards Mao? How can there possibly be a single answer to that?
Wikipedia is significantly above average and a relatively good source on social issues. It has well understood political biases, that I don't like, but it certainly outperforms peer organizations. I don't trust it to report the truth and I trust it substantially more than the NYT or a variety of other journalistic outlets and, frankly, better than a number of meta-analysis...meta-analysises....meta analysi...in published journals.
For example, the wikipedia article on Biology and Sexual Orientation:
https://en.wikipedia.org/wiki/Biology_and_sexual_orientation
Specifically, the section on twin studies. The core issue is that we still don't really know why people are/become homosexual. We know it's not purely genetic, because we've got twin studies. We find a homosexual with an identical twin, we go talk to the identical twin, and most of the time the identical twin is not also homosexual (when I dug into this, I was seeing concordance rates of 50%, they're reporting concordance rates as low as 25%, which seems weird). This is an ongoing area of confusion, because there's clearly something going on genetically, 25%/50% concordance is still way higher than the base rate of homosexuality in the general population, but it's really hard to determine what the other factors are, or most likely, what the gene-environmental interactions that determine homosexuality are. (1)
Touchy subject, right? And if you read the Wikipedia article, you'll see them downplaying this a lot for fairly obvious political reasons. And that's bad. But...man, that article is a lot more honest than 99% of media I've consumed on this topic. It's better than I've gotten from personal conversations with experts in this area.
I think a lot of conservatives and centrists have well founded complaints and issues about political bias in a lot of media and especially in "factual" or "research" entities. And Wikipedia is certainly guilty of a lot of those sins. But it's one of the better actors, not one of the worse. And I want to celebrate the best of the "other" side, not criticize.
On the scale of general actors:
Wikipedia: Well known bias.
NYT/High status academic publication (Harvard): Bad but might be something interesting/useful in any given article.
Vox/99% of media/reference: Absolute dumpster fire, deserving of a 40k purge.
On that scale...I dunno, Wikipedia feels worth defending.
(1) As always, not my area of expertise, more than open to correction
+1 for trying to celebrate the best of one's intellectual adversaries. I really wish more folks did this.
As far as I can tell Wikipedia's most prominent bias is a tendency to recreate academic consensus. This has become more of a problem recently than it was when Wikipedia was founded, since academia has come to produce more outlandish and controversial claims since then. I think this bias might be a limitation of the site's "constitution"; Wikipedia seems meant to be no worse and no better than those sources of information society can agree are respectable and trustworthy. The consequences of this bias for political subjects should be obvious. And since it's also a bias you'll find in most respectable places it hardly makes Wikipedia an *unusually* bad source of information.
On the subtler controversies, I find that the academic consensus bias tends to show through a lot in "criticism" sections for books/theories/etc, which not only reiterate bias found elsewhere but also seem to have their own layer of filtering; e.g. a book known only to philosophers which claims that mind-independent objects exist seems more likely to grow a "criticism" section than a similarly obscure book that claims that nothing exists untouched by culture (and the latter will usually be focused on criticisms about how the touching happens rather than whether it happens).
On the unsubtle issues (like Maoism, but also nationalism, certain wars) I have seen some slow-boil edit wars about stuff it seems like nobody should care about, like the national anthems of long-defunct states. Sometimes the propaganda is obvious enough that you can easily infer the truth by negation, but the more I think about the weird stuff people do the less confident I am in my ability to notice all of it.
I don’t know - I spend a lot of time on paleontology Wikipedia and I realize that I actually have very little sense of what the academic consensus is on things like when life first appeared, because every relevant Wikipedia article always mentions every paper claiming early evidence of life, but I have no idea which are considered consensus and which are one-offs that should be treated like just a single study.
> a tendency to recreate academic consensus
Or a journalist consensus, if academia does not care about the topic sufficiently.
One problem with "academic consensus" is that if there is exactly one published paper on the topic, then the paper *is* the consensus. The easiest way to achieve that is to invent a new word (see "TESCREAL").
What should they be repeating instead of academic consensus?
Wikipedia should only host truth, but if we're willing to lower our standards to things that could be systematically recognized and promoted by a Wikipedia-scale institution, I personally can't think of a good alternative that wouldn't be very close to academic consensus. There might be more accurate proxies for truth, but figuring out which is most accurate wouldn't be much easier than figuring out the truth.
Wikipedia is actually based on reliable sources, because truth is a tricky concept.
Where "reliable" means "says things Wikipedia admins agree with". On anything remotely controversial, the only way to get actual truth out of Wikipedia is to read the talk pages as well as the article, and take note of the sources the admins are excluding.
This can be more trouble than it's worth, but in that case it's not worth using Wikipedia and probably not worth seeking truth; stick with rational ignorance.
I think the first Talk page I ever looked at was for the Celtic harp. Or some other instrument that occasioned dispute between Scots and Irish. I realized the fun was on the Talk page.
Some of the trouble with academic consensus follows from simple incentive analysis: one can expect academic consensus to be biased on any issue that challenges the status of academia as authoritative.
One might think this is okay - the issues that challenge academic status should surely be quite small and dry affairs, having to do with historiography, literary provenance, semiotics, metaphysics, and other multi-syllabic terms unlikely to make it out of a library basement.
In reality, though, they reach into any issue that people care about enough to build an online identity around it. So: politics, religion, health, epidemiology, energy, environmentalism, evolution, race, and really, I could probably have stopped after the first two. If academic consensus only concerned itself with stuff relatively few people care about such as the mass of the Oort Cloud or which polysaccharides can be used to make fiber, or easily checkable stuff like the primality of some 50-digit integer, no one would question it. People question academic consensus precisely because it gets pulled in as an authority on claims that aren't trivial to check and that affect a great deal of policy.
This is currently a lot of issues! When it comes to such issues, academic consensus suddenly becomes very sensitive to which individuals are part of academia when a controversial, policy-driving claim turns up, and by extension, what those individuals happen to think, and how able they are to influence their fellow academics, either to change minds, debate publicly, or withhold funding or credit privately.
In such cases, a third party should trust academic consensus about as much as they would trust anyone with a definite position on anything. Someone with no opinion on abortion ought to trust Planned Parenthood to thoroughly defend one position, but not to represent all of them. Ditto the NRA on guns, Scalia on textualism, the Pope on the historicity of the Apostles, etc. A more complete picture requires consulting sources with an incentive for thoroughness in other directions, and with comparable resources: National Right to Life, Handgun Control, William O. Bradley, Richard Dawkins.
Then comes the equally hard part of resolving conflicting claims from each source, establishing standards for said resolution, and so on.
For positions touching on academic consensus, one would have to turn to sources that make claims conflicting with academic consensus, and that also have comparable resources. The first condition is pretty easy to satisfy; the second is particularly hard. For an example, see the current state of physics surrounding string theory and its opponents. Humphrey Appleby is a physics professor, and a reader here: he could doubtless elaborate.
TracingWoodgrains wrote a good piece of investigative journalism about Wikipedia: https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin
Funny - every time I'd medium-dive into a Wikipedia article that looked a bit off to me, and look at the Talk page and discover a hotbed of controversy, the primary source of frustration was typically some recognizable Wikipedia userID attached to a message that abruptly waved "WP:RS" (Reliable Sources) in the face of whoever raised the complaint. Then I'd read the WP:RS article and find it's a lot of text that looks comprehensive and reasonable at first glance, but turns out to be somewhat circular at second.
Thanks to TracingWoodgrains' *extensive* article, I find a lot of it is traceable to one David Gerard (and a lot of like-minded senior editors who weren't cowbirded off the site), and his background in dubious sites like RationalWiki and /r/sneerclub. Apparently an entire slice of where the world goes for authoritative truth on controversial subjects - Wikipedia - is functionally determined by "Reliable Sources", which is quietly defined as "Sources This One Guy Believes Are Reliable", and enforced by his apparently epic levels of obsessiveness.
If TW weren't comparably obsessive, I doubt I would have known this.
I'm still annoyed that somebody changed the article about Lord Dawson to avoid referring to his killing of King George V as a regicide.
That is such a Dorothy Sayers sentence! Have you read her mysteries? They're set in that era. Highly recommend them.
No, I'm not familiar with them. I'll look into them, thank you.
> But Marxists also complain a lot about bias on Wikipedia from the other side, presumably both complaints can't be true.
Sure they can: different topic areas attract the interest of different influence campaigns. Just look at Eastern European history articles: you get Serbian nationalism on some pages, Bulgarian nationalism on others, Polish nationalism on yet others, etc. That doesn't add up to overall reliability, it adds up to incoherence.
That quote is referring to the Mao article. Different pages having different biases has nothing to do with that quote. Some people complain the criticism of Mao in that article isn't harsh enough while others say it's too harsh.
And it’s quite possible for some parts of each to be true! The article isn’t a single article written by a single human with a consistent bias throughout it - each part is written and edited by dozens of people, and there may well be biases in one direction in some parts, biases in another direction in a second part, biases in a third direction in another, with certain perspectives being underrepresented in all parts. There’s not only two sides to any of the relevant issues, and the article likely doesn’t lean towards the same side throughout.
Sure, there are differences in bias between article sections or even sentences. But they tend to be small, and it would be unusual to see an article swing from an obvious pro-Mao bias to an obvious anti-Mao bias. For this to happen, you need editors on both sides inserting biased sections, but a lack of editors on either side willing to work to improve sections they disagree with. So both sides just leave the biased sections they disagree with alone (perhaps because they didn't bother reading other sections). This is more likely to happen on obscure pages.
If you have more active editors on the page, you can get disputes and edit warring instead, which may then trigger a discussion to reach consensus or a vote. Ideally, the end result of this is an unbiased article that everyone is somewhat happy with; realistically things won't be perfect and one faction will likely have more power in the dispute and the consensus will still be a little biased. This is how an article can have a consistent bias in one direction even if it is written by dozens of people with different biases.
That's a possibility, but I suspect that in different sections of the article, you just have different editors and moderators with knowledge of different parts. People who will recognize a bias on one set of information won't recognize a bias on description of other things they aren't as familiar with. No one's going to be familiar enough with all the parts of Mao's life to have that.
I don't really think that accounts for it. In my experience, a lot of Wikipedia editors are subject matter experts; often professors or PhDs. They've studied the life of Mao. They know their stuff.
I'm sure there are many subtle errors experts miss, as you say. However, if there is a subtle error or bias in a chapter of Mao's life the editors are less familiar with, it's also going to be too subtle for the average ACX commenter and the general masses complaining about bias in Wikipedia. They are generally complaining about big-picture stuff rather than the type of thing even a scholar of Mao would miss.
> They removed the part of the wikipedia article discussing the similarities between dengue and coronavirus, and why we would, a priori, expect the covid19 vaccine not to work (Its actual mechanism doesn't seem to be that of a vaccine, in that memory B-cells aren't being triggered, and we don't seem to have a perpetual (2+ years) memory of the exposure.)
They probably removed it because it was biased and misleading or untrue. We do form lasting memory B cells after the vaccine, and the vaccine, while not 100% effective, does work.
If we were magically given a perfectly unbiased encyclopedia, everyone would still think it had biases. We would read along until we encountered a topic we are biased on and interpret it as a clear example of bias in the encyclopedia.
Short of making testable predictions, which is often not possible, I'm not sure there is any way to distinguish a bias in the encyclopedia from a bias in yourself.
I'm curious where you even heard that the vaccine doesn't cause production of memory B cells. I've seen a lot of COVID vaccine skeptics, but I hadn't heard that one before.
The point is that you're only showing there is a difference of opinion between you and Wikipedia. You have no way of knowing whether the bias lies with you or Wikipedia. From the inside, any bias just feels like you are unbiased and others are clearly biased.
https://www.cell.com/cell-reports/fulltext/S2211-1247(25)00269-4
COVID-19 vaccination induces durable B-cell responses in exposed and naive humans
Spike+ B cells gradually expand toward the recognition of the RBD subdomain
RBD+ B cells are associated with lower breakthrough incidence in naive individuals
COVID-19 recovered develop a stable SARS-CoV-2-reactive atypical B-cell pool
T cell immune memory after covid-19 and vaccination - PMC https://share.google/QCjuuIxCC1REDmn1V
Does anything like MetaMed (https://en.wikipedia.org/wiki/MetaMed) currently exist? I'm facing a tough medical decision, and experts seem to be split on what I should do. I have a tentative view after reading some of the literature, but I'd like someone experienced with thinking about these kinds of questions to look over the evidence
You might try this person. She does biomedical research. https://acesounderglass.com/hire-me/
Comes highly recommended by several people on here.
Samotsvety's site says they're "open to forecasting consulting requests" - no idea of the price, but MetaMed sounds like it was pretty fancy, so if you're wishing for that I guess you're willing to pay a good chunk. Only a couple of members are listed with medical experience, but then again any given problem they tackle will only have a couple of domain experts, right?
Or: feed it all (your situation, your options / possible outcomes, your opinions on all of that, your general life values, and all of those journal articles) into the best model of each of the major LLM providers. It might feel bad, but you can absolutely do worse talking to real doctors (who are themselves probably talking to the LLMs anyways). Probably the biggest obstacle you have is with deep comprehension of medical journal papers, right? Complex niche knowledge is where LLMs shine; they're like living textbooks. If it knows what you're looking for, it will make the relevant information accessible to you, and then you can think it over for yourself.
And, I certainly want to say that I hope it will go well for you.
I know a physician who runs a patient advocacy business to answer these types of questions. Feel free to email me for more info dgold114@gmail.com
Any counter-arguments to "it is worse if a zillion gazillion people are slightly inconvenienced than one person is horribly tortured"?
Mine would be: you can only measure facts, not values. Consider this: https://en.wikipedia.org/wiki/Th%C3%ADch_Qu%E1%BA%A3ng_%C4%90%E1%BB%A9c - was it a good thing or a bad thing? One person set themselves on fire and died a horrible death, and it resulted in some level of political pressure on the South Vietnamese government to stop persecuting Buddhists. Can you measure whether it was overall a good or a bad thing?
In some cases, like torture vs. discomfort, you can make an intuitive guess. But it is not a measurement, and cases like the one above show how it is not a measurement.
> Any counter-arguments to "it is worse if a zillion gazillion people are slightly inconvenienced than one person is horribly tortured"?
As I see it, from the perspective of game theory this is equivalent to "it is worse if a person is slightly inconvenienced than if a person is horribly tortured with probability 1 : zillion gazillion" (a small sketch after this comment makes the arithmetic concrete).
So anyone who disagrees with this should provide examples of inconveniences they are volunteering to experience in order to avoid horrible things with such small probabilities.
For example, any time you were impolite to someone on internet, there is a probability greater than 1 : zillion gazillion that your interaction was the last straw that made the person insane, which made them kidnap someone and torture them to death.
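To make the claimed equivalence explicit, here is a minimal sketch with made-up numbers; N, u_speck, and u_torture are all assumptions chosen for illustration, not measurements:

    # Minimal sketch, assuming arbitrary utility numbers.
    N = 10**18          # stand-in for "zillion gazillion"
    u_speck = -1.0      # disutility of one slight inconvenience (assumed)
    u_torture = -1e9    # disutility of one horrible torture (assumed)

    # Aggregate view: N specks vs. one torture.
    print(N * u_speck < u_torture)        # True: the specks sum to worse

    # Lottery view: one speck vs. a 1/N chance of torture.
    print(u_speck < (1 / N) * u_torture)  # also True: same ordering

Multiplying both sides of the lottery comparison by N recovers the aggregate comparison, so under expected-utility arithmetic the two framings always rank the options the same way, whatever numbers you plug in.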
Interesting! My main thing against utilitarianism is that you cannot measure values, you cannot put a number on them; it is basically opinions. So what you are saying is that what one can more or less guess-measure, or put a number on, is the probability that the opinion is correct? And then this is what you can multiply? So when one trillion gazillion people have a speck in the eye, there will be one thousand people who are so sensitive that for them it is utterly horrible?
> my main thing against utilitarianism is that you cannot measure value, you cannot put a number on them
I understand the sentiment, but at the end of the day, you still have to make choices. You have one million dollars, and you can either build a new hospital or a new gallery. You can say that both health and art are of great importance, in a way that is incommensurable, but at the end of the day, you either build the hospital or you build the gallery, which from some perspective means making an implied value judgment about the supposedly incommensurable things.
Money (and attention, work, time) is the common resource that forces us to make decisions as if we could put a number on things.
If you had a budget to fix thousands of flickering lights in the entire city, but you could redirect that money to instead cure one guy suffering from a chronic painful illness, that is probably the most realistic real-life analogy to "dust specks vs torture". Would you always prioritize saving individuals in pain over fixing minor annoyances that affect many?
But that is exactly it. Bentham, Mill, and the others who invented utilitarianism meant it rather explicitly as a political philosophy for the government, not for the private individual, because governments face choices like that. This is also why it treats people as interchangeable. For me as a private individual it is entirely okay to prioritize helping a friend over helping a stranger, but for the government it is not allowed.
Utilitarianism is not well-optimized for private individuals: basically every parent would rather save their own drowning child than two children of someone else. It is only the government that is not allowed to think like that, that has to see everybody, or at least every citizen, as equally important, and has to budget with many trade-offs. Indeed, exactly as you wrote, the government basically measures utility in money. They will not spend infinite money on saving one 95-year-old cancer patient.
Sam Kriss had this interesting observation that Rationalists somehow talk as if they had infinite power. Okay, let's be fair: if a superintelligence happens, it will have a lot of power and will basically automatically become something like a super-government, so on that specifically, utilitarianism is justified. But all this worrying about shrimp welfare as a normal individual with little power...
> Sam Kriss had this interesting observation that Rationalists somehow talk like as if they had infinite power.
Well, I can have an opinion on right and wrong, even in situations where I don't have the power to actually do something meaningful about it. From outside, it may seem like playing king.
True enough, but understanding not having power should IMHO lead away from utilitarianism:
1) it is okay to simply have opinions instead of trying to numberify everything
2) the whole Bayes thing is for situations where you want to figure something out entirely on your own, like how Einstein figured out relativity; but it is also okay to just contribute ideas to the general discussion and eventually the hivemind of the public will figure it out
3) it is okay to care for some people more than for others, basically because of closeness (EAs explicitly reject that)
In your counterargument, it's not really a person being tortured for the benefit of others. It's something the guy voluntarily chose to do to himself in an attempt to help others.
Here's a short story (5 pages) by Ursula K. Le Guin about a concept like the one you describe; it's pretty interesting:
https://shsdavisapes.pbworks.com/f/Omelas.pdf
Recovery time should be factored in. Those zillion gazillion people will all have gotten over their mild inconvenience in a couple of minutes. Not so much for the tortured, or the people who know about the torture.
Yes: questions like this are insufficiently coherent to yield a precise answer. In some sense morality, like consciousness, is a ret-conned fictional narrative and so there's no objective ground-truth to discover. Debating questions like this is like arguing about how lightsabers really work.
If you believe that a zillion gazillion people are going to be slightly inconvenienced, you're almost certainly just wrong. Your priors for a zillion gazillion people even *existing* should be super super low, and your priors for an effect that is powerful enough to exactly-slightly-inconvenience all of them should be even lower. There's no way that a standard human could live long enough to even see that many people, much less verify that they were slightly inconvenienced.
This problem then reduces to Pascal's Mugging.
Incidentally my objection to the trolley problem is similar. If you're in a situation where you believe that killing one person is the only possible way to save several people, then it's likely that you're wrong and there's a better option you haven't thought of.
There's a relevant post in the Sequences called "Ends Don't Justify Means (Among Humans)".
Interesting! My objection to the trolley problem is this: whoever set up the situation is 1) obviously evil, 2) obviously powerful. That is like a situation where a platoon of SS is pointing guns at you so that you can't just free those people. At that point, you simply do not have much moral responsibility either way.
The correct answer to the trolley problem is to stay well away from the switch (or the bridge with the fat guy or whatever), look for the guy with a clipboard who's trying not to look like he's watching the switch, and shoot him immediately. Five dead innocent victims against one Mad Scientist taken out of circulation? That's always going to be a net win.
Technically, I think he'd be a mad philosopher.
Now this is highly interesting! Why did the Romans, who were not a particularly kindly people and had enough slaves, ban human sacrifice almost completely (I think there was one huge exception, when it looked like they would be destroyed by the Gauls)? Or same for the Greeks: the Athenians had no problem massacring or enslaving the entire population of Melos simply because they wanted independence from Athens, but human sacrifice, nope? Or the Old Testament: dashing out the brains of infants of enemy populations is fine, but no human sacrifice; God even stops Abraham from doing it because it was just a test. I don't think these people were all that motivated by compassion or moral concerns. Why then?
Why did they not sacrifice at least some condemned murderers who would be executed anyway, often very brutally, like crucifixion? Thinking of how the survivors of the Spartacus slave revolt were crucified, would not even a moral, compassionate person say that cutting their throats while praying to Jupiter would have been less bad? What was really happening? Do you understand it?
My best guess is that people sometimes realize that some things must be made completely taboo, because if you allow some of it, you might get a lot of it. If sacrificing one convicted murderer for a better harvest is doable, maybe one day basically all people will be burning their firstborn children alive to Baal, like how some say it was done in Carthage. NRxers have this theory that a virtue-signalling competition specifically in religious-holy virtue can absolutely spiral out of control. On the other hand, if you just have a law that murderers and rebellious slaves are crucified, it will not spiral into crucifying innocent people.
Is there any solid evidence that GLP-1 agonists deliver health benefits that can’t be chalked up to the weight loss they cause? Every study I’ve found reports at least some weight loss alongside any benefit, and the one alcohol-use trial was negative.
Scott thought there probably were other benefits. Seems to me the best path to an answer is to ask one of the better AIs for evidence for and against, and ask it for links to sources. Check the ones that look first-order (research) rather than articles in the media yammering on about research findings.
As I write this I'm realizing with some uneasiness and regret that I now mostly go to GPT-4o with questions I would have asked on here up until a few months ago. I get answers 100% of the time with GPT. Here? Maybe 40% of the time. And there have been a good number of questions that went unanswered that I was sure some readers could have answered. I don't think other readers have an obligation to answer, really, it just seems cold not to take the trouble to do it. Jeez, why not be prosocial here? You're protected in all kinds of ways from being exploited.
So I guess now me : GPT-4o :: lonely guy : AI sexbot. When it comes to getting answers to questions, I am turning to AI in response to how little real and internet people have to give, on average. (In other areas of my life I have not yet felt the need to suck on a cyberxenomorph, though perhaps that's coming.)
Well, I came here after no AI was able to give an answer to my question :(
Really? I asked ChatGPT your question and it gave me a pretty decent answer:
"Multiple large RCTs (e.g., LEADER, SUSTAIN-6, REWIND) show reduced major adverse cardiovascular events (MACE) in patients on GLP-1 RAs vs. placebo, even after adjusting for weight loss ... Several studies — including phase II trials like the Semaglutide in NASH study — have shown improvements in liver enzymes, steatosis, and even fibrosis stage in NASH patients. Some of this is clearly tied to fat loss, but liver-specific effects (e.g., reductions in hepatic inflammation markers and ballooning) appear disproportionately strong compared to weight-matched controls ... GLP-1 RAs seem to slow progression of albuminuria and preserve eGFR, beyond what you'd expect from weight loss or glycemic control alone. Again, some RCTs (e.g., LEADER) support this."
Anecdotally, I have 2 friends who were pre-fatty-liver, and their liver function improved soon after starting on GLP-1s, before significant weight loss occurred.
It's all hallucinated. Not a single study exists that controlled for weight loss. It's just AI hallucinations. You can check the studies it provided and see for yourself.
Really? Huh, ok.
Just asked GPT-4o and got a mountain of info supporting the idea that they have benefits independent of weight loss. It quoted some RCTs. Here is its response. However, you do have to go to the links it gives for the main articles and make sure it didn't hallucinate them.
https://chatgpt.com/s/t_6889721c2e8881919fd574f1c38c45db
Scott’s old piece quotes studies, I’m pretty sure. Do you know the one I mean? Not the recent one about the gray market in those drugs, but an earlier one called something like “why does Ozempic cure everything?”
It's funny that Scott was pooh-poohing the reality of supernatural entities due to their lack of mathematical knowledge in Universal Love, Said the Cactus Person, but IRL, Ramanujan claimed to have gotten all his mind-boggling theorems from his family goddess appearing to him in dreams. Ramanujan was a mathematician of whom it has been said that the word "genius" utterly fails to capture his brilliance, and it is amazing that there was such an intrusion of extremely raw spirituality in a domain commonly perceived as very hard-nosed and rational (though is it really? I read a book by a mathematician arguing mathematics is really primarily about intuition).
Which book did you read about mathematics and intuition?
I saw on another thread someone recommend the book, "Mathematica: A Secret World of Intuition and Curiosity" by David Bessis. I read it and really appreciated it.
To state one of the main takeaways from the book: when you learn something well enough, it seems intuitive. But there can also be particular perspectives that make certain things seem intuitive.
As for Ramanujan thinking his ideas came from a goddess, i.e., revelations. Descartes is actually interesting for this. I do not have the time now to delve into the details, but, from a quick ChatGPT summary of the main ideas regarding knowing something is true:
1. We discover truth through reasoning (thinking),
2. but the reliability of our reasoning, for Descartes, depends on God having created us with a mind capable of recognizing truth.
This is quite interesting, because I personally rely on feelings to "know" whether something is true or not. Contradiction and certainty evoke strong emotional responses in me.
Also, in terms of revelations, I once went to a poetry reading where a poet ascribed their poems to coming from God.
Chain of custody aside, in my experience, ideas just pop up in my mind. I often say that my brain does stuff and I take the credit for it.
In general, I am very interested in where insights come from, and how much control we may have over insights. A couple of years ago, I read the book, "Seeing What Others Don't" by Gary Klein. It was an entertaining and thought-provoking book.
If anyone has recommendations in that direction, I would love to hear.
Yeah, it was me who recommended that book, that's the book I read. I loved that bit about how learning to think in more than 3 dimensions is an embodied thing, and it's only when you intuitively understand what say, 8 dimensions mean, that you can do work in that domain.
I'll admit upfront, that I feel annoyed by the mystification of math and Ramanujan, but I'll try to explain without any annoyance.
Ramanujan is an important mathematician who was described as incredibly talented, but he wasn't the be-all and end-all of mathematical accomplishment. (Hardy, his fan, friend, and mentor-type-of-guy thought Hilbert was greater still. Edit: this part is wrong, see below.) Other great mathematicians proved theorems as impressive as his without any participation of goddesses, so we know that the brains of at least some humans can do it without assistance. But there's something an unassisted human brain can't do.
In Scott's story it's not that the entities "lack mathematical knowledge": it's that the character wants them to perform a calculation that is dull, but absurdly time consuming, a calculation no human brain (or even a computer!) could do quickly enough.
That is to say: if a person wakes up and says "A goddess communicated to me this wondrous proof", this is astonishing but not definitive, because we know that some humans at least can come up with amazing proofs. If a person wakes up knowing how to factor a bonkers large number, this is more convincing, because for sufficiently large numbers, it may be that no human or machine can do it at all.
You note: "[Ramanujan] put out a lot of conjectures that would be proven only after his death, and he did say the conjectures were coming from his goddess. If they were arrived at primarily through work, it wouldn't have fallen to others to prove them no?"
This is not that weird. There are lots of statements and conjectures that mathematicians think are almost certainly true, but aren't able to prove. In some cases someone gains a valid insight that they can't make totally rigorous, not enough for a real proof, but enough to make a very plausible guess. There are piles of unproven mathematical conjectures made on an empirical basis. No goddess participation necessary. What was going on with Ramanujan was a mix of intuition and reluctance to write down proofs when he actually had them. Sometimes he probably had the proofs but didn't record them. Other times it was probably just a good hunch. Both would be totally normal (for a prominent mathematician) and not unique to Ramanujan.
Is mathematics "very hard-nosed and rational" or "primarily about intuition"? Both, in places. There's no contradiction. The end results of mathematician's labor - proofs - are supposed to be perfectly rational and logical and should not rely on appeals to intuition. But the methods by which mathematicians arrive at proofs don't have to be logical at all. Intuition plays a big role in it. You are allowed to do drugs, talk to goddesses then roll on the floor for a while if it helps you (real things people did, though not all at once), as long as the proof is airtight in the end, it's all good.
You note: "I've heard the culture of mathematicians is singularly averse to finding practical applications".
You can find various prominent mathematicians saying things that sound that way, but I think in reality most people are less romantic. It's not that mathematicians hate the idea of practical applications; it's that most of the time they do work that seems needed for the development of some mathematical theory and don't have any idea what the practical applications of it might be, and can't really know, because the distance between high-level theories and applications is so huge and tangled. Fermat's little theorem was proven in, like, the 17th century, and the practical application of it is that if hackers intercept your e-mails, they can't read them - and do you think Fermat could have known that, could ever have figured it out?
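For the curious, here is a toy sketch of that Fermat-to-encryption connection in Python; the primes are absurdly small and every number is chosen purely for illustration (real RSA keys use primes hundreds of digits long):

    # Toy RSA keypair; its correctness rests on Fermat's little theorem.
    p, q = 61, 53
    n = p * q                   # public modulus
    phi = (p - 1) * (q - 1)     # totient of n
    e = 17                      # public exponent, coprime to phi
    d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)           # encrypt: m^e mod n
    assert pow(ciphertext, d, n) == message   # decrypt: c^d mod n recovers m

Decryption works because m^(ed) ≡ m (mod n), which follows from Fermat's little theorem applied separately modulo p and modulo q.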
Most pure mathematicians get asked all their lives: "well, what are the applications of the thing you are working on?", and they have no idea the same way a lorry driver transporting sand from A to B can't have an idea what each individual grain of sand will be used for, and it gets on their nerves the 1000th time somebody asks that. So there's a reason to retreat to some artsy pose: "oh, it's so pure and lofty, I would never soil myself with the matters of practicality", but the real feelings are less theatrical.
But in that case... let's compare that proof with evidence given for a crime at court. The real or true aspect of a crime is actually doing the deed. The evidence given at court is a complicated, circumspect, bureaucratic, fallible process, because you have to somehow convince the judge or jury one way or the other, because society must function somehow.
Can it be said the real math is the intuition, and the proof is just the bureaucracy around it, because science as a collective effort must function somehow?
I have read Heisenberg's Quantentheorie und Philosophie. He said both mathematical and empirical proofs are basically just for other people: once you stumble upon an idea that is simple and beautiful, i.e. elegant, you know it is true. The rest is basically a set of bureaucratic hoops to jump through for the sake of convincing other people.
Which suggests science is at some level non-rational; it's the bureaucratic process of convincing other people that is rational.
> both mathematical and empirical proofs are basically just for other people: once you stumble upon an idea that is simple and beautiful, i.e. elegant, you know it is true.
I think the problem is with the word "simple".
First, if we adopted this as the official rule of science, it would quickly escalate to status games, where various people would insist that the idea is "simple for them" and it's not their fault that others are too dumb to perceive its simplicity.
You probably wouldn't be happy with Einstein telling you "time is relative, because... well, it obviously *is*, right?" (Or, consider quantum collapse versus many worlds. This is a difference in interpretation rather than measurable data, but still each side insists that their idea is obvious to them.)
Second, mathematicians sometimes make mistakes, too. It is possible to make a mistake in a simple and elegant proof just because you made a wrong assumption, such as "prime numbers are odd". The rest of the proof may be simple and elegant, and yet the theorem can fail for, e.g., two and its powers.
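A trivial way such a slip gets caught, sketched in Python (a throwaway brute-force check of the tempting lemma "every prime is odd", nothing from the thread):

    # Is the tempting lemma "every prime is odd" true? Search for even primes.
    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

    print([n for n in range(2, 100) if is_prime(n) and n % 2 == 0])  # [2]

The lemma fails at its very smallest case, which is exactly the kind of boundary condition an elegant-looking proof can quietly assume away.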
If an idea is simple and beautiful to an experienced mathematician, then it is probably 99% likely to be correct. Still worth checking for the remaining 1%.
Finally, there may be situations where we don't have a simple and elegant proof (yet?), but we still want to know the answer, and sometimes we succeed in arriving at it using a complicated and inelegant proof. Mathematicians can still feel bad about it (for example, the computer proof of the four-color theorem), but for the moment it may be the best we've got.
"Can it be said the real math is the intuition, and the proof is just the bureaucracy around it, because science as a collective effort must function somehow?"
No, it most certainly cannot. Your intuition and a nickel will buy you a stick of gum. If you don't have the proof, you don't have anything.
It happens frequently (at least to me) that when setting out to prove something, I will have a clear and intuitive idea of why it's true, and will be able to start writing the proof immediately. Sometimes I finish the proof just as immediately. Other times, the effort of writing it out carefully and rigorously reveals a subtle flaw in the line of reasoning that had been sketched out in my head. Usually I can find a way to patch that flaw. Sometimes it turns out to be bigger than I'd realized at first, and the whole approach must be discarded. If you want a really, exceptionally famous example, consider Pierre de Fermat. It seems highly likely that he felt he had a solid, intuitive grasp of why a^n + b^n = c^n has no non-trivial solutions for n>2. And that statement is, as it happens, true. But the chance that his solid, intuitive grasp actually traced out the lines of Andrew Wiles's eventual proof is basically nil.
There's the gap between intuition and proof writ large for you: 300 years, and 129 pages. Even if some mathematician is such a transcendent genius that their intuition is literally never wrong, nobody else has any way to *know* that without a proof. And even if they just trusted such a person to be right, they could hardly understand it for themselves without it being laid out clearly and in full rigor.
I must apologize that this is very hard to put into words, so I might be not making sense, in words. Please try to read my mind :)
My basic idea was that the really important part is whether the crime was done or not, not whether the crime can be proven at court by the rules of permissible evidence, that is just bureaucracy. The deed is the real thing, the proof and evidence are basically just social rules. Or a better example maybe, it is more important to invent a better mousetrap than proving to the patent office that it really works and there is no prior art. The mousetrap matters more than the patent does.
"But the chance that his solid, intuitive grasp actually traced out the lines of Andrew Wiles' eventually proof is basically nil."
"nobody else has any way to *know* that without a proof"
Again this sounds too much like proving the mousetrap at the patent office is more important than actually inventing it. I think this has things backwards...
Yesterday I made chicken soup. Almost no one knows I did. And? It still happened. Why do I care who knows about it?
This might be true in some parts of mathematics, but elsewhere intuition is notoriously unreliable. Combinatorics and computational complexity theory seem to be like this: major conjectures are frequently disproved even when widely believed for decades. The truth manifold just doesn't have a nice predictable shape.
> You are allowed to do drugs...
Referring to Erdős and speed? I always aspired to have an Erdős number.
> Hardy … thought Hilbert was greater still.
The Man Who Knew Infinity suggests otherwise: "[Hardy] assigned himself a 25 and Littlewood a 30. To David Hilbert, the most eminent mathematician of the day, he assigned an 80. To Ramanujan he gave 100."
Oh yeah, I misremembered this part and lied about it, apologies to the ghost of Ramanujan and also Hardy, my bad.
That stuff about mathematicians being averse to applied work comes from someone I spoke to who declined to get a math PhD because he disliked what he perceived as a cultural aversion to finding applications.
And the claim about Ramanujan being something beyond genius comes from mathematician David Bessis, so noted that there are divergences of opinion among mathematicians.
Bessis also said mathematicians never read math books (as in, books only mathematicians could read) cover to cover, they only go look up specific things they need, is that true?
As to Scott's story, sure, maybe a feat of computation would be more difficult, but it also seems strange to expect a higher being to do something like that.
"Bessis also said mathematicians never read math books (as in, books only mathematicians could read) cover to cover, they only go look up specific things they need, is that true?"
Mostly true. "Never" is an exaggeration, but it's very normal to only read one specific chapter, or to use the book to look up some info. Or a book might cover material you already (mostly) know and you use it to refresh your memory from time to time, but not have to read every word. There's a saying to that effect that you never read math books, you only re-read them.
The logic of Scott's story is: "if DMT entities can perform a calculation, then they are real and we can prove it to everyone", not "if they are real, they 100% can do it". It's completely plausible for a real ghost or spirit to not be able to do advanced math (after all, you and I are real, and we can't pull it off), it's just that the story is funnier if they totally can and are annoying about it.
I don't know the timeline, but Ramanujan put in a lot of rigorous and formalized work on areas like the famous taxi-cab number. Did that work precede his seemingly spontaneous insight about 1729? I read an article that implied this, but I can't find it now. It could be that, in addition to being a bona fide mathematical genius, he was a bit of a showman or prankster when relating how he got his ideas.
In David Bessis' Mathematica, Bessis says Ramanujan put out a lot of conjectures that would be proven only after his death, and that Ramanujan did say the conjectures were coming from his goddess. If they were arrived at primarily through work, it wouldn't have fallen to others to prove them, no? Bessis is a mathematician, so it makes sense to believe his account.
Here's a different article that discusses his work on the topic, though it doesn't make clear whether his hospital-bed insight about the number 1729 came before or after the formal work: https://news.emory.edu/stories/2015/10/esc_taxi_cab_numbers/campus.html
I don't see how your second sentence relates to the first.
I am reminded of the quote: "God created the integers, all else is the work of man."
There is a sense in which you're right, that integers aren't sufficient to model quantum mechanics, but the tools of mathematics are richer than you seem to think, and have developed over the centuries to include some decidedly unintuitive objects: I'd say complex numbers are necessary, but also seem to be quite sufficient. If something comes up, we can go up a level to quaternions or higher. Whatever you're gesturing at with your "math that starts with the wave-particle duality" is probably isomorphic to one of those.
Now, there IS a problem with renormalization, but that's well beyond the Heisenberg uncertainty principle, and at any rate, the point is that those are still just PHYSICS problems: mathematics has tools to deal with that kind of thing, like zeta function regularization, for example.
I would be exceedingly surprised anyone working on quantum mechanics has any difficulty grasping complex numbers: certainly not the basics, and even complex analysis is generally considered remarkably elegant (at least among physicists; the math people will object to the lack of rigor in the pedagogy of how I was taught it).
Even if you can formalize it, I don't see what you hope to achieve with this vague notion of making interacting probability clouds the foundation of new kind of mathematics that we don't already have. I suppose if your intuition serves an infallible oracle for arbitrarily complex calculations of this kind, such a thing might be useful, but if you're going to fall back on calculations eventually (you are), less so.
Ooh, I've heard the culture of mathematicians is singularly averse to finding practical applications, I don't think they care about the relation of math to the physical.
I think they shouldn't indulge so much in the pure mathematics, but it sounds fascinating to me, proving a theorem, figuring out that something abstract is indisputably true. It's the only domain where you can be certain of something abstract, isn't it?
Can someone explain to me why the UK Prime Minister’s announcement that he will recognize a Palestinian state unless Israel fulfills a number of conditions, including agreeing to a ceasefire in Gaza, doesn’t all but guarantee that Hamas will not agree to any possible ceasefire offer put to them?
I was told their conditions for recognizing a Palestinian state specifically require excluding Hamas from its governance.
A possibly naive question but: how does Hamas even still exist as a meaningfully armed force after all this time?
Define "meaningfully armed force". Obviously they don't pose an existential threat to Israel (not that they ever did), but a bunch of armed people hidden around in random places is always going to be a problem.
You have thousands of people who don't wear uniforms, who move about the strip through tunnels (only about 25% of which have been destroyed so far), in a landscape where they have broad support from the population they come from, and a huge number of unexploded munitions that can be repurposed.
It's not like they are going in and shooting at people in groups of dozens (several of these battles happened, when they concentrated in a few schools/hospitals, but those were cleared out). They pop out of tunnels, place bombs on tanks, or shoot people.
Something like half of Gaza has been taken over by the IDF at some point (they claim 75%, but whatever), but that doesn't mean they know where all the tunnels are.
You've had maybe 20 soldiers killed in the last few weeks; that's not a huge number when thousands of Israelis are operating in the strip.
It's not like the IDF knows who Hamas is, where all their weapons are, etc.
But, in military terms, the IDF hasn't 'accomplished' anything for almost a year now. They get random people with guns, younger and less-trained boys are recruited to replenish the ranks, and so forth.
If every Hamas member evacuated to another country, a few years from now another organization would take its place.
The announcement's stated terms are:
>unless the Israeli government takes substantive steps to end the appalling situation in Gaza and commits to a long term sustainable peace, including through allowing the UN to restart without delay the supply of humanitarian support to the people of Gaza to end starvation, agreeing to a ceasefire, and making clear there will be no annexations in the West Bank.
That sounds to me like Israel needs to lift the blockade enough to allow food through in quantity, offer a ceasefire on what the UK considers to be reasonable terms, and agree to peace talks on what the UK believes to be a reasonable basis. They probably don't need to actually implement a ceasefire if Hamas insists on continuing large-scale hostilities, although the UK might then insist on Israel unilaterally taking some kind of defensive operational or tactical stance.
>That sounds to me like Israel needs to lift the blockade enough to allow food through in quantity,
There is no blockade. They are letting food in, in quantity. They paused food aid for a bit a month or so ago, but in recent months the main thing preventing aid coming in is that the UN refused to send in aid unless UNRWA was allowed to do the distributing, and Israel doesn't want that because UNRWA has been funneling food to Hamas (that's their point of view, anyway). The UN had hundreds of trucks full of food waiting just outside the Gaza border and Israel was asking them to send it in, and they were refusing, claiming that it was too dangerous (despite Israeli military offers to escort the trucks). Recently Israel paused fighting and created more secure transport corridors, and the UN is starting to let aid go back in. There is no blockade, just a disagreement about whether UNRWA can distribute the food with the UN refusing to send in aid for a while.
That’s literally all made up and only appears in some Zionist publications. Israel is prohibiting aid and Hamas is not stealing aid.
Is the AP a Zionist publication? (https://apnews.com/article/aid-gaza-hunger-united-nations-e703faaaba945e838aabfb3c7fa32d70)
>Israel says it doesn’t limit the truckloads of aid coming into Gaza and that assessments of roads in Gaza are conducted weekly where it looks for the best ways to provide access for the international community.
>Col. Abdullah Halaby, a top official in COGAT, the Israeli military agency in charge of transferring aid to the territory, said there are several crossings open.
>“We encourage our friends and our colleagues from the international community to do the collection, and to distribute the humanitarian aid to the people of Gaza,” he said.
>An Israeli security official who was not allowed to be named in line with military procedures told reporters this week that the U.N. wanted to use roads that were not approved.
>He said the army offered to escort the aid groups but they refused.
>The U.N. says being escorted by Israel’s army could bring harm to civilians, citing shootings and killings by Israeli troops surrounding aid operations.
Thanks. I've been hearing conflicting claims on that front and was mostly going off of the "allowing the UN to restart without delay the supply of humanitarian support to the people of Gaza to end starvation" bit in the British announcement.
Multiple high-ranking Israeli officials, as well as leaked internal documents, have proposed exactly what we are seeing: push the Gazan population into the south and starve them, while also destroying the civilian water, sanitation, and medical infrastructure necessary for survival.
The blockade is still in effect. What ended is the *total* blockade. Currently Israel lets only a small and insufficient amount of food into Gaza and then fires on civilians who attempt to retrieve it. Obviously Israel denies this, but who are you going to believe? It's their word against virtually every independent organization attending to this issue.
Yeah, the "allowing" they are referring to is complying with everything the UN is demanding before the UN will send the aid in. Israel wants them to send in the aid, they're not stopping them.
Depends - "Recognize a Palestinian State" is a flexible concept, and could potentially include:
"We recognize a Palestinian State in Gaza and the West Bank, the legitimate government of which is... Fatah!"
This (or mere recognition of a West Bank Palestinian State that does not include Gaza at all) is likely an outcome Hamas would not want.
Of course that's what it means. In no imaginable world does any European state unilaterally recognize Hamas as the legitimate government anywhere.
It absolutely does guarantee it, but also this position has been massively overdetermined at this point.
Hamas has made it clear they will reject any ceasefires.
They are only interested in ending the war in exchange for being able to remain the dominant military power in Gaza (though not in 'political' control), similar to Hezbollah's situation in Lebanon (before they lost a war and are now on the path to disarmament).
There's no 'pressure' they are responsive to. Whether Europe pressures Israel toward a two-state solution doesn't meaningfully affect Israel's capacity to get them to surrender. Maybe large-scale population transfer, or annexing a massive part of Gaza, would do it, but I highly doubt that's on the table.
The other thing I don't get is: if Hamas leadership is hanging out in Qatar, why is there no Western diplomatic pressure on Qatar? I mean, would it kill us to say "No more Qatar Airways flights until all Hamas leaders are handed over to Israel" or something?
What do you imagine happening after Israel hangs/shoots all the Hamas leaders in Qatar?
1) Qatar bribes a lot of people with a lot of money. They have a lot of soft power. They just gave Trump a $400 million plane.
2) Realistically the problem has always been that Hamas is not answerable to their political leadership abroad. And you need 'someone' to be the face of negotiations.
It's really important to recognize that Hamas's political leadership were fine taking bribes and living in luxury abroad. They were in it for the cause, as far as other people's children becoming martyrs was concerned, but it's not like they signed off on Oct 7th. Sinwar basically did a coup against the political leadership, and even Iran and Hezbollah were only vaguely in for a war against Israel 'in the future'; he miscalculated.
If you pressure Qatar, you lose a useful intermediary.
Heck, Qatar+Saudi Arabia just signed onto asking Hamas to resign and disarm, which would have been unthinkable a few months ago.
People seem to be concerned about payment processor censorship, but there's actually very little discussion on what can be done about it, save advocating for the S.401 Fair Access to Banking Act. Problematically, people now think that if the act were to pass, the payment processor censorship problem would be fixed. This is not true. The penalties are too low and practically unenforceable. As in, it couldn't be enforced even if it were to pass. The bill needs alterations and nobody that matters even seems to know that.
The only person I've seen mention this as a point of contention is Josh Moon on the Kiwi Farms, who talked to several lawyers about it.
Here's a link to where he talks about it. Just a warning, this isn't intended for the audience of this blog. It's mostly intended for right-wingers upset that they've been prevented from supporting their super edgy websites and who expect that to continue to be an issue in the future. It's also clearly a vent post about payment processors. Hence, there are many, many slurs and the language is more emotional than is strictly helpful.
Basically, reader beware:
https://kiwifarms.st/threads/s-401-fair-access-to-banking-act.224421/
Onion: https://kiwifarmsaaf4t2h7gc3dfc5ojhmqruw2nit3uejrpiagrxeuxiyxcyd.onion/threads/s-401-fair-access-to-banking-act.224421/
Josh also went on a YouTube podcast with Kurt Metzger to talk at length about payment processors. This is good because he's the foremost expert on the subject of de-banking, but is seldom heard in any discussion of it, left or right, because both sides hate him for running his gossip website as he does, and for his belligerent personality.
The sense I have is that this is a people-applying-soft-pressure-to-payment-processors problem, ie, there's some kind of religious group that applied soft pressure and the payment processors caved.
If that's true, then it seems like this is that rare type of problem that is *exactly* solved by getting really angry about it on social media. If the culture warriors on twitter can apply more pressure to payment processors than the religious group, then the problem should go away.
It's possible I'm wrong and there's some sort of actual lawsuit threat from the religious group?
(Apologies, I did not click your link to the slur-filled rant. I try not to read that sort of thing.)
I work in this field (smut) and you just get constantly kicked around by everyone, but if it weren't for mechanical business reasons, somebody would have jumped on getting the cash from what is, ultimately, a field with a lot of money in it. SubscribeStar got a bunch of clients because Patreon started kicking around adult content creators, and I was in that group.
The main problem is something like: John sells some porn to Bob. Bob's wife, Alice, sees the line item on their credit card bill for "Hot Teen Sluts," and goes to Bob to ask what this is. Bob says (lies) that he has no idea, so Alice calls the credit card company to dispute the charge. This heightened risk of payment dispute is broadly true, though it can happen for other reasons (kid getting the card and the parents being more willing to dispute than in the case where he bought a bunch of Fortnite skins, guy getting pissed that the camwhore didn't actually love him, etc). This produces a cost to the credit card company, so the % they take from the original company mechanically goes up to compensate for Bob. If you are a company which mixes sales of both porn and non-porn content (e.g. Steam, SubscribeStar, Patreon), you want the non-porn rates, but you are in fact selling porn content, which is in fact at an increased risk of payment disputes.
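A back-of-the-envelope version of that "mechanically goes up" step, with every number below made up purely for illustration (none of these rates come from any real processor):

    # Hypothetical figures only; all four inputs are assumptions.
    avg_ticket = 20.00      # average sale, in dollars
    base_fee = 0.029        # headline processing rate
    dispute_rate = 0.02     # share of adult-content sales charged back
    dispute_cost = 15.00    # processor's flat cost per dispute

    # Expected dispute cost per sale, folded into the rate on the ticket.
    extra_rate = dispute_rate * dispute_cost / avg_ticket
    print(f"break-even fee: {base_fee + extra_rate:.1%}")  # 4.4% instead of 2.9%

Even a modest dispute rate eats percentage points of margin, which is enough to explain why mixed platforms want porn sales priced, or partitioned, differently.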
The solutions I've seen are:
- banning all porn from your platform outright (generally done at the outset);
- bizarre and arbitrary ban lists to try to reduce payment disputes ("Reincarnated In A World Where Women All Love Being Violently Raped" presumably having a higher decline rate than "Big Titty Elf Island", but then you get all the standard issues of censorship, with the person making the list not able to do an actual analysis, and also frankly not caring to do so); and
- just having some type of partitioned sub-platform where they take a bigger fee (this is basically what SubscribeStar does).
I've heard that argument before and think it carries a lot of weight, but the question for me is: if payment processors only look at their spreadsheets and dispassionately cut out the (category of) customers with the highest risk of chargebacks, why did it take a private initiative (Collective Shout) for them to take action? Or was that a coincidence, and Collective Shout got to claim credit where none was due?
The typical complaint I hear about porn companies having to shut down due to payment infrastructure threats is that credit card companies are run or infiltrated by prudes who use private enterprise to do what the government is forbidden to do. This might be the first time I've heard an argument that factors in the obvious profit motive to stay in the porn business, and honestly, it makes sense that these purchases get declined so often that it's denting the margin.
So now I wonder how much of the higher rates / shutdowns traces back to merely this, as opposed to prudes.
Thanks for the clear explanation!
It sounds like the theory is: "this Australian activist group claimed credit for making the payment processors make Steam de-list all those incest games by doing a bunch of call-in complaints, but a more likely explanation is that the payment processors decided incest games carried too much risk of chargebacks and they're just not supporting them anywhere."
Is that right?
Probably, yeah. Patreon tightening the screws followed the same general pattern of adding increasingly arcane rules about what you can publish, and I'm not aware of any particular pressure campaign on them.
e: Well, probably it went "complain to Visa" -> "Visa thinks Steam is at increased risk of chargebacks" -> "Steam doesn't want to deal with it"
No problem, I put the warning there for a reason. The payment processor problem is likely not going to be solved by public outrage because I suspect it wasn't caused by public outrage. Payment networks have been doing this for years, and have been completely ignoring all the outrage that came before.
All of the people and companies (PornHub, SubscribeStar, Gab, Hatreon, DLSite, Wikileaks, gun retailers, Canadian truckers' GiveSendGo Pages, and many more) apoplectic with rage at losing business should have accomplished something if it were true that they respond equally to different types of outrage. I suspect this stubbornness is because they are in favor of these bans for ideological reasons. To address the broader point, it's a chronic problem and people should not have to marshal an outrage mob just to participate in e-commerce. That means a counter-mob is not really a satisfactory solution to the underlying problem.
Does it even need enforcement? The existence of such a law, even with nominal penalties, would let Visa and Mastercard point to it as a way of resisting public pressure, and continue taking a cut of "immoral"-but-legal activities they facilitate. They're incentivized to lobby in favor of it, and I wouldn't be surprised if they were surreptitiously doing so.
I don't know why you're assuming so much about the character of those with authority in these companies. Perhaps they like exercising their power to rid the world of pornography? Capitalism is not some ultra-efficient process that selects out things like puritan sensibilities when the damage is a speck next to their profits. And payment processors are not creatures of capitalism anyway.
An actually effective ban would be the selecting pressure in this case because it might actually result in such characters no longer being effective as leaders for payment processors. A toothless ban isn't going to accomplish anything unless the leadership is in agreement that they don't wish to do this and only need an excuse not to.
That is the case with plenty of tech leaders. Like Matthew Prince of Cloudflare, who at first defended Kiwi Farms from deplatforming out of his libertarian principles, and then buckled to public aggression. A ban like the one proposed would help a boardroom of Matthew Princes; it'd give them the necessary fig leaf to stick to their guns. But I don't think that's what we're dealing with.
I think we're dealing with true believers, some true believers at the very least.
I'd be interested in seeing a journalist investigate the personal lives of the people running these opaque companies, dig through their garbage, that kind of thing. I suspect you might find a real nasty customer posing as a Matthew Prince type.
Even if that's not the case, if they're lazy and profit driven, they might just keep banning things after email campaigns (in the case of Steam deleting adult games) and phone calls (in the case of PornHub deleting the majority of its content). Who can put a number to bad PR? Might as well just get rid of it, not like they're going to give us a real fine.
He's back in the United States now to do advocacy for a free internet. He made an org called the United States Internet Preservation Society to lobby and everything.
Added it in. I think UK users get warning screens directing them to vpns or the Onion link anyway, but it's good digital hygiene.
On colonizing space. Am I stupid or is it stupid? Literally everywhere on Earth, from Antarctica to the ocean is a better place to live than Mars. Why not go there first?
Asteroid mining? Just do it with robots fam. Why would you want to live in a coal mine if robots can do it?
Wait, it gets better. Humans are not expanding into inhospitable environments; we are retreating even from very hospitable ones. There are whole villages, even more or less whole rural regions in France and Italy, that are going empty, partly from demographics and partly from urbanization.
Apparently young people do not even want to live in pretty French villages with good air, soil, water and everything.
Why would we want to colonize space? Shouldn't we try to colonize those pretty and very hospitable villages first?
I think Mars is not a good place, except maybe to build some huge massively environment polluting factories. It has no water, no air, and is even smaller than Earth.
On Jupiter and Saturn, the gravity would crush us like bugs. On Uranus and Neptune, the Sun is so far away that it seems like just another star... maybe a little bit brighter, but definitely not giving you enough heat to survive.
On the other hand, I think -- if it is possible -- it would be nice to have a backup for humanity in catastrophic cases like "a supervirus appears that exterminates all humans on Earth", "a huge asteroid we cannot deflect hits Earth and destroys all multicellular life", "a superintelligent AI succeeds in killing all humans (but also destroys itself, so it fails to expand into the universe)", or even "a planetary government stops progress and creates a new Dark Age".
But if we ever get this backup, I think it will be in the form of "humans living in spaceships" rather than colonizing other planets. The planets are just... not useful the way they are now, and it would be unimaginably costly to fix them. And even if we got lucky and found a suitable planet a few dozen light years away, the way there would be so long that if humans can survive the trip, they can probably survive staying in space indefinitely.
Mars has some air and some water, which we can use for all the usual purposes after applying basically 19th-century chemical engineering (some of which has in fact been demonstrated on Mars).
And Saturn's gravity is about the same as Earth's - the planet is larger than Earth, but also less dense. In principle, a "cloud city" floating in Saturn's atmosphere could have Earthlike gravity, Earthlike atmospheric pressure, ready access to oxygen and water, and temperatures comparable to Antarctica. There are engineering reasons why I wouldn't recommend this as a near-term target for extraterrestrial settlement, but if our civilization survives the next few decades we'll get there eventually.
Cool, thanks for the info on Saturn!
I believe you are exactly correct. Colonizing space may be a worthwhile goal to pursue one day, but not at our current level of technology. It's a ridiculous pipe dream for the foreseeable future. Even asteroid mining with robots seems fairly suspect at the moment. You need to be bringing back some absurdly valuable payloads to cover the kind of costs it would incur.
Incidentally, I think if humankind *is* going to make any serious use of resources beyond Earth's orbit, better systems for getting things off the planet will be a necessary prerequisite. Chemical fuel rockets aren't going to cut it. There are a number of very interesting proposals for planet-based installations to help stuff reach orbit, but they're all very speculative (read: they're just shy of mad science and it's wonderful).
I think you're right. I've found this essay "Why Not Mars" very persuasive (https://idlewords.com/2023/01/why_not_mars.htm).
Since the 1960s, computer technology has improved by a dozen orders of magnitude while keeping-people-alive-in-the-vacuum-of-space technology has hardly budged. It would be a great idea to colonize other planets, but we're far, far further from that possibility than many hope.
I also am very suspicious that Elon is using a promised Mars mission to whitewash his image, kind of Pascal's-mugging all the Mars hopefuls while doing a bunch of bad things to people on Earth.
There are possible catastrophes which could kill all humans in the lightcone, all humans on Earth, or all humans on Earth except those in very remote locations. All of these are worth taking preventative measures against. There should be efforts put toward having people living underwater, and people living in Antarctica, and people living on Mars, and people living in as distant parts of space as we can reach, because each of these slightly increases the species' odds of survival.
People living on Mars or more distant places are no hedge at all for the foreseeable future. You know what happens to people living on Mars if Earth gets wiped out? They die too. Most disasters that would actually wipe out everything on Earth would also reach Mars all by themselves. But even if they didn't, "Martian colony that can sustain itself with no external help" is so laughably far beyond "baby's first Martian colony" that you can barely hold them both in your field of vision at once.
The purpose of early Martian colonization efforts is to bring us closer to a self-sufficient colony. People are pretty clear about that.
Even so, a self-sufficient Martian colony does absolutely nothing to hedge against most of the plausible existential risks. It's certainly no protection against nuclear war or AI risk, and probably not against pandemics. It's really only geological risks that it helps against.
The main point of a self-sufficient Martian colony is preparation for a self-sufficient out-of-solar-system colony.
Which is relevant, but probably not so relevant that trying to work out the details matters this century.
I do not believe that X-risk avoidance is a good near-term argument for space settlement, and have I think explained that elsewhere. But the four-plus month travel time to Mars with near-term technology would I think pose an adequate buffer against most pandemics. If Patient Zero infects two people on Earth, one of whom embarks on a voyage to Mars the next day, then Earth will almost certainly be experiencing an unambiguously-recognized pandemic by the time the ship reaches Mars. The Martians will know what to look for and what to do about it, and a Mars settlement will have lots of compartmentalized, hermetically sealed habitats if needed for quarantine.
That's a good point, certainly for things like standard respiratory viruses. Less effective for something with characteristics more like HIV (which spread to a lot of places before it was even identified as a concern), but it's possible that genetic tools would allow more effective testing in the early years after the virus is identified, and earlier identification of the virus.
This is a little like saying "the purpose of Archimedes messing around with fluid displacement was colonizing the Americas." Technically speaking, those two things do have a relationship. But "better understanding of how things float" was ridiculously far down on the list of barriers to the classical world doing something like that.
The things that are needed to establish a self-sufficient Martian colony are technologies that are getting really quite close to the "indistinguishable from magic" end of the scale. In particular, they would need to include things like hugely durable, long-lasting materials; incredibly cheap, reliable, and compact energy storage and generation; and, above all, manufacturing processes that can do vastly more with vastly less than anything we have now on Earth.
Now, those technologies would be great to have. But they'd be great to have *on Earth.* On the list of reasons why people might want them, "allowing someone to do a Mars colony" is really quite far down. So I don't expect such an effort to speed them up to any meaningful degree.
Until we have those technologies--at least on the near horizon, if not on hand--shoving people into tubes filled with combustible liquids and shooting them millions of kilometers away is really not going to help. There are a lot of good reasons *not* to pour that staggering amount of resources and human talent into the effort now, when it's pretty much guaranteed to get cheaper, safer, and faster long before "self-sustaining" is a realistic-looking goal.
I'd be very happy to see more cutting edge scientific studies of Mars. But once we can admit that we're nowhere near a Martian colony, we can also admit that robots are a much better choice for that than humans right now. Realistically having an excuse to work on automation and miniaturization tech will be far more useful even from a "wanting to establish an eventual colony" standpoint than sending humans would be.
I'm laughing so hard that you led with "all humans in the lightcone." Such a precise way of saying "everybody dies." But yes, broadly speaking I think your comment is excellent. Survival hedging means diversifying your population into inconvenient and probabilistically low usefulness locations just like financial hedging can mean buying inconvenient and probabilistically low usefulness assets.
I think it's worth noting that if Starship were to achieve its stated goals in terms of price per kilogram to orbit, it would actually be faster and less expensive for most people to get to low Earth orbit than to reach interior, or potentially even coastal, Antarctica.
The movie Elysium, which I have never seen, but of which I've watched parts over the shoulder of someone else on a plane, makes the best case for space colonisation.
Space colonisation only makes sense if you can make space nicer than Earth, which means big Stanford Toruses in orbit within reach of other places you might like to visit. Making the physical environment better than Earth is challenging (although potentially reduced gravity might be nice) but you can make the political and social environment better; people can start new colonies with independent governments that follow whatever rules they like, and keep out whatever sort of person they consider undesirable.
The true currency of man isn’t money, it’s motivation.
There's a very simple way to produce motivation in other people: money. But that doesn't mean money is the only way to convince people to do things. Religion, patriotism, and ideology also work pretty well in the right contexts.
The inevitable consequence of humanity is space colonization. There is literally a near-infinite amount of resources out there for us to access, and given a long enough period of time, we'll have solved all our problems here on earth and moved on to imagining problems elsewhere to solve. Think of the 99.9999% of the sun's energy that's just wasted! Think of the 99.99999% of stars that are just wasting their energy too! Consequently, we like to write stories about our future that anchor many people's thoughts in that future, like how the Bible might anchor the thoughts of a 12th-century Crusader.
Space is motivating. It gets people inspired. We tell compelling stories about the future, and this serves to reinforce the drive to go to space. We continue to tell stories about the future because the sorts of environments that create interesting stories aren’t easily created with modern technology, so we assume advanced technology that allows us to manipulate the conditions our characters must battle against.
Space colonization is the fulfillment of that. So long as there are men like Bezos, Musk, and the many, many people who work for them who are inspired, and thus motivated, by space, there is a significant financial incentive to go there. It's as if God came down from heaven and offered a hundred billion dollars to the first person to set foot on Mars. Except instead of God motivating us through currency, it's our love of a certain type of story that motivates us inherently.
I've argued the Antarctica critique myself and still mostly agree with it, but there are three dimensions in which the Moon or Mars might be a better candidate for colonization than Antarctica. I heard two of them in a past open thread (from John Schilling, I think) and figured out the third on my own (although I'd be surprised if I'm the first to have thought of it).
First, if you're making stuff that's going to end up in space anyway, be it satellites, probes, space telescopes, or infrastructure and supplies for manned space flight, then it's a lot more convenient to be able to make it on the Moon. Because of gravity wells and the rocket equation, it's much, much more efficient to move stuff to low earth orbit from the surface of the Moon than from the surface of the Earth. If you have enough demand for stuff in Earth orbit or elsewhere in space, and you have a reasonable way to set up mostly-self-sufficient mining, manufacturing, and launch infrastructure on the Moon, then a small Moon colony becomes an appealing idea. Ideally, it would be mostly automated, but you'd still want at least a small crew of humans to deal with unexpected issues.
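To put rough numbers on the rocket-equation point, here's a minimal Python sketch using the Tsiolkovsky equation. The exhaust velocity and delta-v budgets are commonly quoted approximations, not mission-specific figures, so treat the output as illustrative only:

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: initial mass / final mass for a given delta-v."""
    return math.exp(delta_v / exhaust_velocity)

VE = 4400.0  # m/s, roughly a hydrolox engine's effective exhaust velocity (assumed)

# Approximate delta-v budgets in m/s (textbook-style assumptions):
budgets = {
    "Earth surface -> LEO": 9400.0,               # includes gravity and drag losses
    "Moon surface -> LEO (with aerobraking)": 3300.0,  # ~2400 to lunar orbit + ~900 trans-Earth
}

for label, dv in budgets.items():
    r = mass_ratio(dv, VE)
    print(f"{label}: mass ratio {r:.1f}, propellant fraction {1 - 1/r:.0%}")
# Earth: mass ratio ~8.5 (about 88% of liftoff mass is propellant)
# Moon:  mass ratio ~2.1 (about 53% propellant), before any staging overhead
```

And since the exponential compounds across stages, the practical gap in delivered payload is even wider than these single-stage numbers suggest.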
Second, while Antarctica has several big climate-related advantages over the Moon or Mars (warmer than night or shade on the Moon or all but the hottest parts of Mars, an actual breathable atmosphere, and abundant surface water), it has the disadvantage of having actual weather, and Antarctica's weather is abominable. The weather turns the latter two advantages into monkey's-paw-cursed versions of the things you'd wish you had more of on the Moon or Mars. The abundant surface water is in frozen form and is inconveniently piled atop soil and mineral resources, and the breathable atmosphere tends to move around annoyingly fast. Mars gets wind storms too, and those would be potentially dangerous to colonists, but Antarctica's wind storms have a lot more mass behind them and tend to move that abundant surface water around besides, burying structures and unsheltered people in snow unless you're careful and diligent about wind shelter and clearing snow off of stuff.
The third has to do with sunlight. Full direct sunlight under optimal conditions (clear sunny day at solar noon in the tropics) on the Earth's surface is about 1 kW per square meter. On the Moon's surface, it's about 1.4 kW/m^2 (same distance from the sun, but no atmosphere in the way). And on Mars, it's about 400 W/m^2 (some atmosphere but less than Earth, plus inverse square effects from being further from the sun). This is important for solar electricity generation and for growing crops. You literally never get the full kW per square meter of sunlight in Antarctica or anywhere close to it because the sun is never anywhere near directly overhead. The lower the sun is in the sky, the more air the sunlight goes through before reaching the surface, and the more ground area the same amount of direct sunlight is projected onto. I'm having trouble finding figures I'm confident I can compare apples-to-apples, but my best estimate is that the interior of Antarctica gets about 30% as much sunlight over the course of the year as Earth's tropics, about 75% as much sunlight as Mars's tropics, and a bit over 20% as much as the Moon's tropics.
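If anyone wants to sanity-check those flux figures, here's a quick sketch. The inverse-square law gives the above-atmosphere numbers; the transmission factors for Earth and Mars are my own rough assumptions, tuned to roughly match the ~1 kW and ~400 W surface values above. The second loop shows the projection penalty from a low sun, which is what ruins Antarctica:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU, above any atmosphere

# name: (distance from Sun in AU, assumed clear-sky atmospheric transmission)
bodies = {
    "Moon":  (1.0,   1.00),  # no atmosphere
    "Earth": (1.0,   0.75),  # ~25% clear-air loss at solar noon (assumption)
    "Mars":  (1.524, 0.70),  # thin atmosphere plus dust (assumption)
}

for name, (au, transmission) in bodies.items():
    flux = SOLAR_CONSTANT / au**2 * transmission  # inverse-square law
    print(f"{name}: ~{flux:.0f} W/m^2 with the sun directly overhead")

# Flux on level ground scales with the sine of solar elevation:
for elev in (90, 45, 23.5):  # 23.5 deg is roughly the solstice maximum at the pole
    print(f"sun {elev} deg above horizon: projection factor {math.sin(math.radians(elev)):.2f}")
```

That projection factor compounds with the longer air path at low sun angles, which is why the Antarctic interior does so badly despite its months of continuous daylight.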
Right on all three counts, and glad to know that my previous writings on the topic weren't entirely wasted :-)
W/re Antarctica, while the bulk of the continent is as you note nigh-uninhabitable and also completely worthless for any purpose beyond science and maybe extreme adventure tourism, the coastal regions are another story. Those are not really a worse place to live than e.g. Barrow or Svalbard or Novaya Zemlya, and probably have the same level of resources, so we would expect them to have the same level of settlement. Rather like Greenland, with an empty interior but still 50,000 or so people and even a small city on the coast - but Antarctica is seven times larger.
Unfortunately, Antarctica is locked off by an almost universally accepted international treaty that says the only allowable activity is science. I believe Chile and Argentina have tried to establish de facto settlements by claiming they're just being family-friendly in allowing their "scientists" to bear and raise children, but that's pretty much a dead end. Fortunately, we haven't agreed to anything that daft in space (though it was a close call back in the 1970s).
Earth orbit, as you allude to, is already the site of broadly profitable activity to the tune of nearly a trillion dollars a year, and that's likely to expand by an order of magnitude if launch costs drop to anything like the levels Musk, Bezos, et al are expecting. At that point, yes, it's definitely both practical and profitable to set up mines (and mining towns) on the Moon. But also on some of the near-Earth asteroids, and possibly the Martian moons, all of which are roughly "equidistant" from Earth orbit in energy terms and which have different resource profiles. And while Mars is a bit "farther" out because of the gravity well, it's *still* easier than hauling stuff up from Earth and has yet a different set of resources. Tourists staying at the LEO Hilton may be drinking Martian wine because it's both more exotic and cheaper than the Earth stuff.
And you're right about the solar energy, but you're not even close to the first to think of it - that was the "killer app" that Gerard O'Neill and company proposed for space settlement and industrialization in the 1970s. If you want solar energy on Earth, and particularly on some part of Earth that's not next to a tropical desert, the most efficient way to get it is to put your solar collectors out where you get 1.4 kW/m^2, all day, every day, with no worries about hail or dust or wind messing up your solar panels, and just beam it to where it is needed. Which, yes, we know how to do safely and in a way that can't be turned into a death ray.
But the economies of scale mean that it's only cost-effective if it's done *big*, with individual "powersats" having large-nuclear-powerplant level outputs and with hundreds of those to amortize the cost of the industrial facilities you'll need to assemble them (and with mines on the moon, etc, as above). That's less likely to be the killer app now, because the 1970s "energy crisis" is ancient history and because solar panels got cheap enough for us to start building en masse on Earth before space launch got cheap enough for us to put them in the sky. But now we're reaching the point where one of the limiting factors is NIMBYs blocking the construction of power lines connecting the places with lots of reliable sunlight to the cities with lots of demand, so there might be room for someone to make a profit by putting everything but the receiving antenna in Nobody's Back Yard.
Good point about space-based solar power being beamed back to Earth. I think I first learned about that from SimCity back in the early-to-mid 90s, and in more detail from more serious sources later on. I even had a hare-brained idea in high school for using SBIR contracts to bootstrap a startup that would eventually put solar power satellites in low polar orbits. The idea was way over my head and I had absolutely no realistic shot of getting it working from either a technical or business perspective, of course, but I had ambitions and some clever notions, and that seems like all that matters when you're 16 or so.
While writing my comment in this thread, I was thinking more about sunlight as a resource for use directly in the colony or outpost in question. Growing your own food and generating your own power saves the bother of shipping in food and fuel, and sunlight makes that a lot easier to do on the Moon, in that respect at least, than it would be in the interior of Antarctica. On the other hand, food and fuel are quite a bit easier to ship into Antarctica than they are to send all the way to the Moon.
The issue of sunlight for crops is on my mind mainly because of a Heinlein novel, Farmer in the Sky, about terraforming Ganymede and setting up a farming colony there. He does have a passage where the main character talks about how much dimmer the sun is on Ganymede than on Earth because of the distance, but this doesn't seem to inconvenience the crops very much. Some kind of greenhouse-like "heat trap" is used to excuse the colony being hospitably temperate, but the crops seem to do just fine on a tiny fraction of the sunlight they evolved to grow in, and the main limiting issue for agriculture is turning regolith into fertile soil. Once I tried my hand at gardening and realized how many food crops need several hours of Earth-normal direct sunlight a day to grow decently well, this oversight started bothering me. I tried to figure out how bad it would be for growing crops on Mars and came up with the answer that Mars's tropics get similar amounts of sunlight to Alaska or northern Canada (or coastal Antarctica, probably), which suggests that agriculture on Mars would be inconvenienced by want of sunlight but not fatally so.
Part of the problem with colonising Antarctica is political: the Antarctic Treaty explicitly prohibits actually doing anything commercially useful with the continent. Some kind of hotel built on the northernmost tip of the continent would, I think, actually be viable, but not sufficiently so for anyone to risk rocking the Antarctic Treaty boat, especially since the obvious place to build such a hotel would be somewhere in the overlapping Argentinian and Chilean claims.
Yeah, I have the fear that the first thing to happen after "we've opened up Antarctica for colonisation" is "and now these South American countries are going to war over whose territory is where".
I can't realistically visualise anything there but "lots of mining and mineral extraction" and that's a rather grim and depressing prospect: see the glorious slag heaps where once we had pristine natural environment (yes, I realise it's "snow and penguins" but we already got lots of slag heaps). Still, if potential colonists get eaten by shoggoths, we can't say we haven't been warned!
Right, this is the logic of the Antarctic Treaty. There might be some economic value there, but that economic value would quickly turn negative if we started fighting over it, so let's just all pretend it's not there.
As a kid in Australia, all our maps of Antarctica showed the continent pizza-sliced into various national territories, Australia's being by far the largest (albeit rudely bisected by a tiny slice of France). But it turns out that not everybody else's maps necessarily respect those claims.
Background: aerospace engineer, pro-colonization but not outrageously so.
The Antarctica critique is a fairly well-known one, and there's basically nobody who will dispute it from a strict economics standpoint. The usual case is some combination of talk about man's destiny and pointing out that there's a lot of potential utility from space colonization that you mostly don't see from deep ocean or Antarctic colonization. (In both cases, if you need someone, it's fairly easy to bring them in from outside in a way that isn't true of space.) Which brings us to:
>Asteroid mining? Just do it with robots fam. Why would you want to live in a coal mine if robots can do it?
Because robots can't do it, at least not all of it. Obviously you're not doing asteroid mining by sending out a guy in a spacesuit with a pickaxe, and there will be a lot of robots. But we are still quite a ways away from robots being able to solve arbitrary problems nearly as well as humans can, and particularly at first, I expect asteroid mining to throw up lots of arbitrary problems. Maybe we'll eventually reach the point where we have enough experience to be able to build a machine that doesn't need to have people nearby to fix problems, but that will not be our first machine, or our 10th.
So if we need arbitrary-doing-things capability in space (let's say that we realize we're about to run out of Platinum on Earth and go for asteroid/lunar deposits) then we're going to need to send people, and the economics of this are such that we really are going to want to have them live there for quite a while. If you're somewhere like Earth Orbit or Luna, then the people go up for 6 months and then come back, which is absolutely a thing that happens in both Antarctica and the ocean (oil rigs, modulo transit time considerations). But if you're going out to Mars, then transit time alone means you're likely to want to stay quite a bit longer, and "let's just have our population be here forever" starts to look pretty enticing. As does letting people stay permanently on Luna if they want to and there's enough activity to support that.
(I'm pretty firmly in the "economic value" camp and less in the "Man's destiny" camp, so I can't speak for them, sorry. I am extremely skeptical of it as an anti-X-risk thing because it's going to be an extremely long time before a colony can be self-sustaining without support from Earth.)
I'm not a big proponent of space colonization, but I think I get it.
We know for a fact that (in a long time) the earth will eventually become unlivable for human beings. If humans don't figure out how to sustainably live off planet, that means we know for a fact that humans will die at that point as well.
It's not driven by any particular contemporary benefit; it's a minimization of X-risk.
> We know for a fact that (in a long time) the earth will eventually become unlivable for human beings.
And it will be a very, very long time until Earth becomes more uninhabitable than any other planet. If you can colonize other planets, you can "colonize" Earth as well.
I'm not sure what you mean, sorry.
Eventually, the sun will expand and "swallow" the earth. Are you saying that at that point, it would be easier to live on earth than another planet?
Or are you just saying that the concern is far enough in the future that it's not worth worrying about now?
I think the appeal of space colonization (compared to something like nuclear war prevention) is that most X-Risks are probabilistic or uncertain. "Changes in the internal workings of the sun will render the earth uninhabitable in a few billion years" is basically guaranteed, and some people don't like the idea of "Yeah, we'll deal with it when we get there"
I think Jim meant that by the time the Earth becomes uninhabitable (e.g. due to high temperatures from the expansion of the sun), we'd have the technology to re-terraform ("colonize") the Earth to make it habitable again.
This obviously won't work once the Earth is vaporized.
There is also the unknown X-risk of an asteroid impact or something of the like. We don't really know where most of the asteroids are, and pretty much at any time the earth could become unlivable for human beings. Even human threats like nuclear war or plague add to this unknown risk. By creating colonies in environments across space, we have a better guarantee that humanity will continue to eke along even if something wipes everyone on earth out.
Nuclear war and pandemics would take out a space colony too, at least if it’s on the moon or mars.
It sure seems like by the time we're good enough at working in space to have a self-sustaining Mars colony or two, deflecting incoming objects that are much smaller than the moon will be something we can do.
Firefox had a link to an article about groundwater pumps throwing off the Earth's rotation. https://www.popularmechanics.com/science/environment/a65515974/why-earth-has-tilted-science/?utm_source=firefox-newtab-en-us
After reading it, I don't understand it. So, now you all can read it for me, and tell me if it's true.
To rephrase the same claim less sensationally, the rotational pole of Earth has shifted by 7.5 millionths of a degree as a result of water pumping.
While it is certainly neat that people can measure this and attribute it to a cause, it also does not seem like the kind of thing which will destroy all life on Earth.
How the Earth spins (i.e. the speed of the spin and the axis along which it spins) depends on how mass is distributed around the planet. We don't live on a perfectly uniform sphere; instead it is slightly oblate, made of layers of different materials, and has some parts which are heavier than others.
If you change how that mass is distributed (e.g., by melting the layer of ice around the top of it and then spreading that around as water; or by pumping water out of the ground and pouring it into the oceans), then it changes how it spins.
The surprising thing to me here is how much water we've pumped out of the ground. I'm not going to do the calculations, but it seems reasonable that moving two trillion tons of water around has affected the distribution of mass and so slightly altered how the world spins. Of course it is a very small change, but then two trillion tons is also pretty small compared to the whole mass of the planet.
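For scale, here's a back-of-the-envelope sketch. The ~0.8 m pole drift is my assumption, chosen because it reproduces the 7.5-millionths-of-a-degree figure quoted upthread:

```python
import math

M_EARTH = 5.97e24       # kg, mass of the Earth
R_EARTH = 6.371e6       # m, mean radius of the Earth
PUMPED_WATER = 2.0e15   # kg, the "two trillion tons" from the article
POLE_DRIFT = 0.8        # m, assumed surface displacement of the rotation pole

# The water moved is a vanishingly small fraction of the planet's mass:
print(f"mass fraction moved: {PUMPED_WATER / M_EARTH:.1e}")  # ~3.4e-10

# Converting the pole's surface drift into an angle gives the tiny shift quoted:
angle_deg = math.degrees(POLE_DRIFT / R_EARTH)
print(f"axis shift: ~{angle_deg:.1e} degrees")  # ~7.2e-6, i.e. millionths of a degree
```

So both the cause and the effect are around ten orders of magnitude below anything planet-scale, which is why it's a measurement curiosity rather than a hazard.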
Oh, is the ice layer the bit that's supposed to affect sea levels? Like it rotated into a warmer area or something? I didn't see how that connected at all.
Changing the rotation won't affect the polar ice at all. We're talking about a few inches here, so the change is really insignificant climate-wise.
The point is more that if you shift the way mass is distributed on the sphere then you change how it spins. Part of that comes from losing ice mass at the poles which then spreads into the ocean. But here they are saying some also comes from pumping water out of the ground and using it (i.e. taking it from underground rocks where it had collected and then using it for farming or whatever, that then ends up going into the oceans rather than back into the ground). In the paper they link they say this has also raised the sea levels slightly, but it is on the order of millimetres, so nothing to really be concerned about.
There's no real panic here. It doesn't matter that the axis of rotation has shifted by a few inches. It won't affect the climate or anything else, really.
Reading it again, I think they're trying to say that tracking how this rotation shifts could help to better monitor how water is redistributing around the planet (e.g. from ground water, from the ice sheets). But I'm doubtful you'd actually be able to tell much from it in practice.
It needs much more than that to tilt Earth's axis! https://en.wikipedia.org/wiki/The_Purchase_of_the_North_Pole
Relevant to ACX circa 2022 - that old cash transfer/EEG study fails to replicate in a new RCT (among other disappointing findings) in this NBER paper: https://www.nber.org/papers/w33844 reported on today in the NYT.
The overall findings (no effect of $4,000/yr cash transfers to mothers below the poverty line with newborn children on various cognitive, developmental, and behavioral outcomes at age 4) are of course a downer, though as with Head Start I suspect any benefits would show up in less directly "cognitive" life outcomes (HS graduation, incarceration rate, employment as an adult) vs. "can a 4-year-old rotate shapes in their head."
One maybe-saving-grace: the covid pandemic hit halfway through the observation period and most of the participants in both groups got a boatload of stimulus money, possibly diluting the effects of the study money.
The only salient thing would be if the large cash group reproduced more than the low cash group.
I was wondering at first "so what did the mothers spend the money on?" and then I read this:
https://www.nber.org/system/files/working_papers/w33844/w33844.pdf
"60% of mothers randomized to the low-cash gift group"
That's a whopping $20 per month. An extra twenty bucks is nice to have, but I don't see it making the huge differences expected.
So what about the 40% of mothers who got the $333 per month? That's more in line with the kind of extra money that makes a perceivable difference in quality of life, when you're below the poverty line.
This bit is just sad: it's not auguring well for your chances in life when your mother gets sent to jail before you turn four:
"For this data collection, 984 of the original 1,000 mother-infant pairs remained eligible (there were five maternal deaths, five child deaths, two maternal-child separations, and four instances of maternal incarceration)."
I still don't see anything in that PDF about what the money was spent on; if mom buys booze, cigarettes and drugs with the $333 it's not going to the baby. If the money goes to things like paying the electricity bill, that relieves one source of stress and improves the environment, but it's still not like buying better food or enrichment material for the kid's developing brain.
The whole thing seems very scrappily designed, was the $20/month meant to be some kind of control group or what?
"One thousand racially and ethnically diverse mothers with incomes below the U.S.
federal poverty line were recruited from postpartum wards in 2018-19, and randomized to
receive either $333/month or $20/month for the first several years of their children’s lives."
An extra $300+ per month is good, but if there's no accounting for what it was spent on or how it was used, then you can't really say if the cash transfer was good, bad or indifferent for the kids (maybe without that extra money they'd have done worse on the tests). Maybe at that level, $300 is not enough and if you made it $3,000 per month you'd see real differences. Maybe it's not the money, it's the genetics and the environment and poor parenting.
I think when doing a cash transfer study, ignoring how the money is used is sort of the point. You can tack it on as FYI data, but it can't be part of the evaluation itself. You are testing the efficacy of a hypothetical program that will not attach conditions to how money is spent. You are evaluating whether a measurable outcome was improved. You know some money will be wasted; you don't know how much, and you also don't want to have to adjudicate edge cases, as letting people be their own judge is one of the alleged benefits of cash vs. in-kind transfers. We spent X, Y was the outcome.
The $20 was a control group -- a placebo control, if you will -- with the aim of disaggregating any possible effects from receiving anything at all from the actual effects of the money. You might hypothesize that receiving anything could generate feelings of gratitude ("wow it's so great the government is supporting me by doing this study") that are not really the effect of the value of the money itself.
Sort of analogous to how studies on psychedelics use a very low dose as the control group, versus an actual inert placebo.
I think the spending habits were published separately. NYT says there was no evidence it was spent wastefully:
>Mothers in the high-cash group did spend about 5 percent more time on learning and enrichment activities, such as reading or playing with their children. They also spent about $68 a month more than the low-cash mothers on child-related goods, like toys, books and clothing.
>At the same time, the study found no support for two main criticisms of unconditional payments. While critics have warned that parents might abuse the money, high-cash mothers spent negligible sums on alcohol and no more than low-cash mothers, according to self-reporting. They spent less on cigarettes. Nor did they work less...mothers in the two groups showed no differences across four years in hours worked, wages earned or the likelihood of having jobs
https://www.nytimes.com/2025/07/28/us/politics/cash-payments-poor-families-child-development.html
Regarding maternal stress, even that was not reduced:
>One puzzling outcome is that the payments failed to reduce mothers’ stress, as researchers predicted. On the contrary, mothers in the high-cash group reported higher levels of anxiety than their low-cash counterparts. It is possible they felt more pressure to excel as parents.
Strange stuff.
Good to see somebody did check on where the money was being spent. I withdraw that objection.
I wonder if the stress came from "now I have this extra money but what if it gets pulled?" which would be worse if you're not trusting the money will indeed keep coming every month and you are worried about budgeting or taking on debt and then the funding is yanked and you're worse off than when you began, or even "now I have more money, my landlord is putting up the rent/my family is coming around to mooch off me".
"mo' money, mo' problems"
As I tell my students: "Never trust a single study." We will need independent researchers looking at different aspects of this problem in more detail before anyone can say with authority what does or does not make a difference in the lives of poor children.
> The overall findings (no effect of $4,000/yr cash transfers to mothers below poverty line with newborn children on various cognitive, developmental, and behavior outcomes at age 4) are of course a downer
Sounds like good news to me. In fact maybe we can start charging poor people more tax, if it doesn't make a difference either way.
I'm seeing a lot of military thinkpieces lately talking about how the XM7 is a stupid boondoggle. People who know weapons/military stuff, is this accurate, overstated, or another case like the F-35 where everyone hates on it now but in ten years they'll all eat their words?
The F-35 program started in a good place, got on a tough trajectory, and there was an intervention and it got turned around. The very short version is that it was a program to be the "low" to the F-22's "high", and it was so promising that everyone tried to get their thing onto it, which was too many things. The program was on the road to a weight and cost and delay death spiral, which triggered oversight and a flurry of articles. As a result they got disciplined and started saying no to stuff and focusing on cost and manufacturability, resulting in a good plane that mostly everyone is happy with, and some versions may actually be comparably cheap to the F-16s you might consider buying instead. Alongside that you also had the "Reformer" clique, headed by Pierre Sprey, selfishly spreading serious misinformation.
At no point did anyone think the stealth or the sensors wouldn't do what they were expected to. The two lines of criticism were "but at what dollar and weight cost?", which the F-35 program addressed, and "wHo EvEn NeEdS sTeAlTh", which history has.
So far the problems with the XM7 seem very different. The problems it's reportedly having - mechanical wear and failures, for example - just shouldn't be happening in a modern manufacturing context. The problems are being reported by people close to the testing group, unlike the armchair Reformers. The problem the XM7 is meant to solve - that assault rifles may not carry enough punch to get through modern Chinese body armor - has an off-the-shelf solution: battle rifles. The H&K G3 and the FN FAL, for example, were fielded successfully by our allies for many years during the Cold War. So it's doubly embarrassing to get wrong.
Meanwhile, assault rifles continue to work well in Ukraine and Israel. So if the XM7 is really going to turn things around and provide battle rifle performance in an assault rifle package, they need to figure themselves out and fast. Other rifles have; ArmaLite's M-16 had a rocky deployment but then took over the world. So did Accuracy International's L96.
But at the same time, the difference between small arms may just matter less to the outcome of wars than the difference between fighter jets.
How is the M7 not a battle rifle? Agreed that it’s embarrassing to not catch the engineering or manufacturing issues in preproduction testing, but I assume they’ll be able to fix that eventually. My main issue with battle rifles is how heavy they are—basic infantry loads these days start at around 100 pounds, and go up from there (especially for members of crew-served weapon teams)—and the M7 weighs ~4 pounds more than the weapon it’s supposed to replace despite the standard ammunition load being reduced by a third. I don’t love that.
A battle rifle in the classic NATO taxonomy fires 7.62mm, while an assault rifle fires 5.56mm. The M7 sits in between, firing 6.8mm, although the cartridge is fairly similar to 7.62 in size, weight, and power. The AK family, though obviously not NATO weapons, are also battle rifles.
And yeah, battle rifles dominated in both East and West until the Vietnam war, with the M14 replacing the M1 and turning out to be a mediocre general infantry rifle. It has since become a well liked marksman rifle. (Some of the M7's critics predict a similar story.) The shift to assault rifles that started with the M16 was a step down in bullet power, but (as you allude to) thought to be an improvement in practical lethality at the expense of range and stopping power.
And since then the assault rifle has stayed relentlessly winning. The West keeps adopting not just assault rifle platforms, but usually M4 derivatives - itself a derivative of the M16. Even notable exceptions like Israel (who famously developed the Tavor) are still using NATO assault rifle standards and just changing implementation details. And Israel still doesn't just use but acquires M4 derivatives, even as it exports the Tavor to e.g. Ukraine. All in all, I'd argue the M4 family is the most prolific in the world. So the M7 has had an uphill battle from the start.
Out of curiosity, do you have a source for that taxonomy? My understanding is that NATO countries have standardized 7.62x51mm (~3.5 kJ) as the benchmark full-power cartridge, but any taxonomy that classifies weapons entirely by round caliber is mostly useless. I know there isn't always a clear boundary between the two, but I think it's reasonably agreed that assault rifles generally sacrifice some amount of power and precision at longer ranges for higher sustained rates of fire, much lower recoil, and overall easier handling--as you mentioned, this has proven to be a very good trade in practice. The AK-47's 7.62x39mm cartridge (~2.1 kJ) might be a tweener on power (even that's a stretch--much closer to the NATO 5.56x45mm's ~1.8 kJ), but I would argue that all of its other features very clearly make it an assault rifle (not to mention the AK-74's 5.45x39mm cartridge at ~1.4 kJ).
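For anyone wanting to sanity-check those kJ figures, muzzle energy is just kinetic energy, E = ½mv². A quick sketch with typical published bullet weights and velocities (my assumptions, not official load data) roughly reproduces them:

```python
# cartridge: (bullet mass in kg, muzzle velocity in m/s) -- typical values, assumed
loads = {
    "7.62x51mm NATO": (0.0095, 850),
    "5.56x45mm NATO": (0.0040, 940),
    "7.62x39mm":      (0.0079, 730),
    "5.45x39mm":      (0.0034, 900),
}

for name, (mass, velocity) in loads.items():
    energy_kj = 0.5 * mass * velocity**2 / 1000  # E = 1/2 m v^2, in kilojoules
    print(f"{name}: ~{energy_kj:.1f} kJ")
# prints roughly 3.4, 1.8, 2.1, and 1.4 kJ respectively
```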
Getting back to the topic of the M7, I would argue that once they work out the reliability issues, it's still going to have the same general performance characteristics that have relegated battle rifles to specialist roles for the last ~50 years--especially since modern body armor is already benchmarked against full-power cartridges, which remain in widespread service in medium machine guns.
Don't disagree with you on the proliferation of AR derivatives, at least in the west--the M4 is a fantastic service rifle. Rather than a better battle rifle, I wish we'd been able to develop a cartridge offering better armor penetration than 7.62x51 at comparable or lighter weight than 5.56x45 without a significant increase in recoil.
Better penetration than 7.62x51 at less weight and recoil seems unlikely without a quite revolutionary change in small arms technology.
But if the theory that our next opponent is likely to be using Level IV or equivalent armor holds, then just having better penetration than 7.62x51 is probably worth it. 7.62x51 AP does not penetrate Level IV armor at any range. 6.8x51 AP does, or at least should out to ~600 meters. That's the design requirement, and since the round delivers more energy and higher velocity across longer distances, concentrated into a smaller area, it's certainly a reasonable expectation.
Yes, it's about a kilogram heavier than an M-4. Would you rather go into battle with a 4 kg rifle and 200 rounds that will penetrate the enemy's armor, or a 3 kg rifle and 300 rounds that will bounce off? Or you can go with an old-school battle rifle, lugging around the four kilos and having only 200 rounds to bounce off the enemy.
The XM-7 is probably the minimum viable rifle to meet that threat, if and when it materializes. The teething problems are annoying, but basically inevitable. Trials with the XM-7 began only last year; it took the AK-47 *eight* years to go from initial trials to large-scale deployment. Fortunately, we aren't doing the M-16 thing of rushing it into service in the middle of a war.
Which we'd probably wind up doing if we said "Nah, we don't need any of that gimmicky unreliable new stuff, an M-4 was good enough for my daddy in Iraq", and then found ourselves fighting a peer competitor with body armor as good as our own. Better to work the bugs out now.
Sure, but there are plenty of existing technologies for this that mostly just need to have the bugs worked out. The most difficult problem right now is reducing case weight (or solving the outstanding issues with caseless ammunition). With the same bullet weight and penetrator, the only significant difference between 6.8 and 7.62 is energy retention at range. Currently fielded armor will stop battle rifle rounds with steel penetrators (it's unclear what level of armor the median Russian or Ukrainian soldier has right now, but no one is losing the war because of poor service rifle terminal ballistics), and we can already field armor that will stop tungsten carbide penetrators basically whenever we want. Assuming adversaries are at roughly the same place, we're going to need something better than 6.8 to defeat it regardless, and given the choice between two weapons that don't work well against armor I will happily take the lighter one with 50% more chances to hit somewhere the armor doesn't protect (which, let's face it, is still most of the body--it doesn't even protect many of the places that will kill you very quickly).
I don't disagree that we should have a better service rifle, but I think the M7 is too little of an improvement in ballistics for the downsides--especially since our gear is already way too heavy, and lightweight body armor is much more technically challenging than lightweight ammunition.
I’m afraid I don’t have a source for the taxonomy; I learned it 15 or 20 years ago and couldn’t tell you where. My apologies.
For what it’s worth, Wikipedia’s page on battle rifles agrees with my understanding. Not particularly authoritative, of course.
As for body armor, I know less than I’d like, but at least ‘Big Mac’ (of Big Mac’s Battle Blog) has claimed to own armor plates rated for .50 ball. If true, that indicates to me that modern armor is a complex landscape where the choice of rifle isn’t about having uncontested ability to defeat armor but a more nuanced “problematize the choice of infantry equipment” for OPFOR.
hahaha. Just because NATO's only plan is "call in an air strike" doesn't mean fighter jets are the be-all end-all.
I'm sure Saddam and Khomeini will find that line of reasoning comforting.
Who has ever lost on the battlefield with air superiority?
Wars won/lost is too chunky a metric to be useful, but I bet there's individual soldiers who have lived or died because of the equipment they were carrying. Maybe their gun jammed, maybe the barrel was warped, maybe a reload took half a second longer than it should have, or they had to look down at an inopportune moment, maybe they were a little bit more exhausted from carrying around a slightly heavier rifle.
If we assume we're aiming not just to win wars but to minimise casualties on our side, then the case for choosing infantry equipment carefully looks a lot stronger.
Totally. In an ideal world we have the bandwidth to get everything right, because as you said it all matters.
But sometimes militaries have to make decisions of priorities. I was expressing a personal belief that if I had to choose, getting the fighter jet right has more impact on the outcome of a war than getting the rifle right.
Probably the only case I can think of where having a better rifle was decisive was the Uzi for storming the Syrian bunkers in the Golan Heights in 1967.
The Soviet invasion of Afghanistan is a good example of how insurgencies can survive air power. But, of course, the first thing the West provided was anti-air missiles, because the air power was so decisive. And it was a lesson that America didn't heed enough when we faced an insurgency there a few decades later.
Similarly, in the Syrian Civil War the Assad regime forces nearly folded despite having air superiority, requiring Russia and Iran and Hezbollah to bail them out. But there too the rebels had Western anti-air support.
Even in Ukraine, where neither side has air superiority, Russia's air advantage has been decisive in several notable moments in a way that no rifle has been.
For sure air power can't do everything. Even more damning than the examples you listed are America in Vietnam and Cambodia and Laos, and the insurgencies in Iraq and Afghanistan; airpower was able to win battlefields and wreak havoc but not win wars. You'll find me in full agreement that it has limitations.
But so does every weapon! They're just tools. America has not only tended to have better planes than our enemies in most conflicts, but better rifles, and yet we lost the fights when our strategies weren't fit to the challenges.
But it wasn't a Better Rifle that disassembled Saddam's military twice. It wasn't a Better Rifle that cleared the way into smashing Khomeini's missile forces and nuclear program. Or the Syrian nuclear program. Or the Iraqi nuclear program. It wasn't a Better Rifle that won in the Bekaa Valley or removed Nasrallah. It wasn't a Better Rifle that unseated Assad, but (drone) air power was decisive. It wasn't a Better Rifle that halted the Somali advance into Ethiopia in the Ogaden conflict. It wasn't a Better Rifle that intervened in Kosovo. Genuinely having Better Rifles (and better aircraft!) didn't redeem the brutal Rhodesian bush tactics. Having the superior Chassepot rifle didn't save the French from Bismarck's Dreyse needle rifles in the Franco-Prussian War. Native Americans often had more modern guns than the American colonists.
You obviously need infantry weapons. You need them to be at least good, and you need your infantry to have confidence in them. But does having the best infantry weapon win wars? Evidence is ... thin, at best. Sometimes winners do have the best infantry weapon: but that might be a downstream effect of a deeper cause of victory, rather than the cause itself.
Well, they are more decisive than rifles, at any rate.
Reasonably sure the Ukrainians do not consider the AKM to be a viable substitute for surface to air missiles.
Someone get this person a Marshal's baton.
thanks
Apparently the NYT continues to have its nose in the ACX feed, as it just published its own review of Alpha School https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html (https://archive.is/WQT8Z to get past the paywall).
There's nothing interesting in it that wasn't already in the ACX review, so I don't feel the need to summarize per open thread guidelines - just commenting on its existence.
Maybe those NYT noses are here right now. Hi guys! I used to be a NYT reader but I'm not any more.
The NYT article is not nearly as hostile to Alpha School as I expected. Credit where credit is due.
Is it a coincidence that the political coalitions in the US in the late 20th/very early 21st century mapped so neatly onto the political coalitions of the late 19th/early 20th century (with the party names reversed)?
They don’t map *that* neatly. Late 19th century Republicans were the party of big business, infrastructure, low tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and high tariffs. There’s a few specific flashpoints where they are perfectly anti-aligned with modern politics (notably on the status of black people and on tariffs) but there are some where they are pretty closely aligned with contemporary politics (notably big business and immigrants).
The maps of 1896 and 2004 are particularly interesting because they are so close to perfectly opposed. (https://www.270towin.com/historical-presidential-elections/timeline/) Washington is the only state that voted Democratic both times, and there’s only a few states that voted Republican both times (North Dakota, Iowa, Kentucky, West Virginia, Ohio, Indiana). If you choose 2000 as the comparison instead you get New Hampshire in place of Iowa.
But the contemporary coalition, which is more perfectly opposed on issues (with the tariffs thing) is less perfectly geographically opposed, with the Midwest, and Georgia, Arizona, and North Carolina, having partly switched since 2004.
> Late 19th century Republicans were the party of big business, infrastructure, low tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and high tariffs.
You're mixing up a number of things here.
- Republicans were consistently the more protectionist party, like the whigs before them. The decoupling of modernization and tariffs as political issues in the US didn't really solidify until FDR. In the 19th century, industry and protectionism go hand in hand.
- Factory workers vs. factory owners cut across party lines until, again, Roosevelt. Generally speaking during the third party system skilled labor leaned somewhat more Republican while unskilled labor leaned somewhat more Democratic, but these are weak tendencies dwarfed by other factors, and *both* parties were the party of big business. 1896 is an unrepresentative year here - that's exactly the point where the Bourbon Democrats start to lose their hold on the party
- Immigrants polarized along ethnoreligious lines. Germans and Scandinavians leaned Republican, the Irish leaned Democratic, and the major eastern/southern European immigration wave began only very late in the century and so largely couldn't vote yet. Immigration restrictionism as such was not really an issue at the time - it was always restriction of *that sort of immigrant* (whatever that sort might be: Irish, Chinese, etc) and didn't track national-level party politics particularly closely.
Immigration didn't stop until the business classes no longer wanted it, around 1924. They had started to fear European ideas like socialism and anarchism (which, to be fair, did lead to violence against capitalists).
Care to elaborate?
It's not a coincidence. It's a sloppy political history that isn't true but is useful for ideological purposes.
I'm concerned about AI warfare, both for its own sake and because AI arms races bring existential risk that much closer [1] [2]. Some thoughts:
- AI is already used at both ends of the military kill chain. Israel uses "Lavender" to generate kill lists in Gaza [3]; Ukraine's "Operation Spiderweb" drones used AI to recognize and target Russian bombers [4].
- Drones are cheaper than planes and tanks and missiles, leveling the playing field between the great powers, smaller countries, and militias. The great powers don't want it level. Thiel's Palantir and Anduril are already selling AI as potentially "America’s ultimate asymmetric advantage over our adversaries" [5].
- Manually-controlled drones can be jammed, creating another incentive to use AI as Ukraine did.
- A 1979 IBM manual said "A computer can never be held accountable, therefore a computer must never make a management decision." But for war criminals, this is a feature. An AI won't be tried at the Hague; a human will just say "You can't prove criminal intent, I just followed the AI."
(And this isn't even getting into spyware like Pegasus [6], which I imagine will use AI soon if it doesn't already.)
Groups like Human Rights Watch, whom I respect, have talked about what an AI-weapons treaty would need to satisfy international human rights law [7]. But if we take existential risk and arms races seriously, then I don't think any one treaty would be enough. First, that ship has already sailed. Second, as long as we continue to use might-makes-right realpolitik at all, the entire short-term incentive structure will continue to temporarily reward great powers racing to build bigger and better AI, and such incentives mean no treaty is permanent (see countries being allowed to withdraw from the nuclear non-proliferation treaty). I think the only answer is to really finally take multilateralism seriously (third time's the charm, after post-WWI and post-WWII?) [8]. Not just talking about international law and the UN enough to cover our asses and scold our enemies, but *actually* treating these as something we need like we need air [9]. E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first.
[1] Filkins, D. (2025). Is the U.S. ready for the next war? The New Yorker. https://archive.is/SdTVv
[2] https://www.hachettebookgroup.com/titles/eliezer-yudkowsky/if-anyone-builds-it-everyone-dies/9780316595643
[3] https://www.972mag.com/lavender-ai-israeli-army-gaza/
[4] https://www.kyivpost.com/post/53784
[5] https://investors.palantir.com/news-details/2024/Anduril-and-Palantir-to-Accelerate-AI-Capabilities-for-National-Security/
[6] Farrow, R. (2022). How democracies spy on their citizens. The New Yorker. https://archive.is/4UJAB
[7] https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
[8] Sachs, J. D. (2023). The new geopolitics. Horizons. https://www.jstor.org/stable/48724670
[9] https://www.penguinrandomhouse.com/books/738224/the-myth-of-american-idealism-by-noam-chomsky-and-nathan-j-robinson/; reviewed in Foreign Policy at https://archive.is/B70tg.
Let’s face it, nobody much is held responsible for drone strikes, even when they blow up civilians, even when they’re directly controlled by operators.
The development of AI targeting and attack systems will just be a further level of insulation: it’s nobody’s fault, just something that happens in war zones.
"E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya"
So, basically, your plan is to establish a broad and enforceable consensus against the development of AI weapons, without the support of any politically significant faction of the United States of America? Let us know how that works out for you.
Seriously, stick to one issue. The bit where the "New Atheists" said that in order to be a proper Atheist you also had to be a feminist, antiracist, LGBTphillic antifa progressive, did not do the cause of Atheism any favors. There are avenues of AI development, military or otherwise, that I'd rather the world not pursue any time soon. But I don't think I'll be following your lead, or standing anywhere near you, if this is what you're bringing to the table.
As I recall, the New Atheist movement broke into three: 1) the LGBT-leaning stuff, 2) the Islamophobia that was in contradiction with 1, and 3) morons.
It was probably 3 that caused most people who were atheist to move away from associating with the movement.
I'm pretty sure there was substantial overlap between all three subsets.
I imagine AI warfare as primarily involving autonomous ethnoweapons designed for massacring civilians, as this is a good fit for cheap drones with rudimentary onboard natural language processing. Think the plot of Metal Gear Solid V, but less stupid.
If you just want to wipe out civilians, you can already do that with bombs. I guess it would be more useful in internal conflicts, where you want to leave infrastructure intact.
>But for war criminals, this is a feature. An AI won't be tried at the Hague; a human will just say "You can't prove criminal intent, I just followed the AI."
I don't follow. Nuremberg trials established that "I just followed orders" isn't a valid defense, why would "I just followed (an AI's) orders" work better rather than worse?
"the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first."
I don't see how the second part follows from the first part. The US government, let's say the State Department, could throw Bush and Obama under the bus and send them off to the Hague or whatever to be judged for their sins. The US DoD could still be pursuing the bestest most powerful AI to maintain its military advantages over potential rivals the whole time. "The US" isn't one unified whole of anything, and different parts of it are likely to continue to pursue whatever they perceive to be in their self interest (as is every other country, no? Why would the US be unique in this regard?)
I think a lot of concerns about AI in war/autonomous weapons are overstated. For pretty much any definition you can give, I can point to systems in service for between 50 and 150 years that meet it, and smarter weapons almost always make bystanders getting hurt far less likely. (For instance, older anti-ship missiles would go to an area, turn on their radar, and attack whatever their algorithm saw as the best target. It was up to the operator to make sure that best target wasn't a container ship. More modern ones have IR cameras and the ability to check if the ship they see is actually a type they want to go after.) I don't see much reason to expect this trend to stop, particularly because there are good reasons for weapon designers to want to not hurt things that aren't targets. At best, it's just a waste, and at worst you have made various people mad who you would prefer not to irritate. We've also just fundamentally gotten better at doing testing of this kind of stuff over the last 50 years. It drives up the price, but if there's an AI apocalypse, I doubt military weapons AI will be a significant part of it.
To my mind a big concern involves control of the military. It's hard for the president to order the army to help him annul the election and install himself as dictator because most soldiers won't go along with that. The more of the muscle is machines that take their orders from a central system of some kind, the smaller the group of people needed to carry something like that off.
The best argument is that the US had better get behind real multilateralism before China takes over as the primary superpower. That outcome isn't inevitable, but Xi has to die eventually, and who knows what will happen after that? The Chinese have certain inherent advantages that aren't going away. A strong G-20 with some sort of enforcement power would go a long way toward stabilizing great power conflicts.
I think the area to start in isn't global warming or armed conflict, because the incentives aren't there. A global tax regime (the EU has already started down that road) seems more doable, being in the interests of the great powers, esp. including China. Rein in the oligarchs, and a lot of other things become easier.
>E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya
I note that this hasn't happened. I also note that Putin hasn't been tried for invading Ukraine.
I view international law as, at best, a really bad joke.
I don't expect this to change. Frankly, given the nature of most regimes around the world, and the sorts of things their leaders _could_ agree on, I'm just as happy to _not_ have a way for a consensus of rulers to enforce their views.
Having watched the UN become an anti-freedom, anti-Western cesspool, I'm inclined to chalk it up as "looked like a good idea at the time" and support its abolition.
Specifically re AI: An unverifiable arms control treaty isn't worth the paper it is printed on, and AI is fundamentally data, software, and CPU cycles. At the moment data centers are visible, because no one has an incentive (from a treaty they are cheating on) to hide them, but they are fundamentally a large mass of overgrown office equipment. Give the USA and PRC an incentive to hide them, and I'm confident that they will successfully hide them.
>There are actual "international law/conventions" that pretty much everyone abides by. Consider the backlash for use of large nuclear weapons (or the deliberate triggering of large nuclear weapons of your opponent)...
Many Thanks for your reply!
Law has nothing to do with this. Mutual assured destruction is a (meta?)stable equilibrium of deterrence by _national_ control of weapons. If Russia blew up Washington D.C., we would blow up Moscow, completely regardless of what the UN, the ICC, or any set of lawyers said about it.
I will concede that in low stakes commercial disputes, there are some conventions that e.g. shipping companies abide by.
When push comes to shove, international law is a bad joke.
Many Thanks for your comment!
>"If Russia blew up DC" -- and we could prove it, naturally.
Fair. I'm considering the case where the nuke is delivered by ICBM, and we tracked it. If we _don't_ know who nuked us, then we don't know who to retaliate against. ( And neither the legal system, nor public opinion, for what they are worth, which isn't much, knows where to direct their ire (or, in the case of anti-American wokesters, celebration) either. )
>It's being a very bad neighbor, who has made the entire game less fun. That gets you banned from the table, or at least sidelined for all the fun "commerce."
Unsubstantiated. Jim has a really good response to this in https://www.astralcodexten.com/p/open-thread-392/comment/140186210 (currently right under this comment, as this comments section is currently displayed).
> Consider the backlash for use of large nuclear weapons
Nobody's really tried, so we can't really tell how that would shake out. Yes, everyone will be mad, but... what are they going to be able to do about it? The US certainly hasn't paid any price for nuking Japan.
Well said!
> An AI won't be tried at the Hague
Neither will an American soldier, so I don't see how that's relevant. All of these naive attempts at "international law" are worthless, given that any of the great powers will just ignore them the moment it becomes an inconvenience, and these smaller nations have zero leverage to do anything about it.
You want world peace? The world being brought under one flag is the only way you're going to get it... and that's going to require an overwhelming amount of force. AI is looking to be a viable source of such power. Of course everyone is going to pursue it at all costs.
An AI would not be considered to have any rights. It wouldn't be tried because if the right authorities decided it had done the wrong thing, they'd turn it off.
Things get exciting when they can't turn it off anymore--either because it is too powerful, or too widely distributed, or too essential to their survival.
>All of these naive attempts at "international law" are worthless
Well said!
I used to think that way, but when I read "A City on Mars" by Zach and Kelly Weinersmith, they had a section on space law (which is of course international law) that makes some good points about how countries do generally try to abide by international law. Will they cut their throats over it? Of course not! But there's lots of areas of international law where countries have more of an interest in a stable set of rules than they do in momentary advantage.
But that's not "law". That's a temporarily stable equilibrium. There is no authority to enforce it, and the moment it becomes inconvenient for any party, it ceases to exist. This is a situation where the momentary advantage is overwhelming. It's not in the US's interest to make any concessions.
Why the heck would we hire terrorists in the first place? We already have a pipeline for recruiting homegrown soldiers. When we blow up things, we call it a military operation, not terrorism. The difference is that we have leverage.
Cecil Rhodes pt 2, here.
See this is exactly the kind of shit I'm talking about.
this is true and also a nightmare
How much would you pay to be the only person in the world with access to 2025-class LLMs in 2010? You're not allowed to resell via APIs (e.g. you have a token budget that is sufficient for a very heavy individual user). You are allowed to build personal agents. You don't know how it works, so you can't really benefit from pretending to have invented it. How much money/power could you generate in 10 years, and how would you do it? Does it change dramatically if you go 2000–2010 or 1990–2000?
I would start a marketing firm offering targeted ads for the first time. No one is doing this in 2010, but they are doing it five years later (albeit not with LLMs), so big business is in a good position to understand what this is and take advantage of it. It would be like being the first one in on the California Gold Rush. The 1990s are too early--the infrastructure isn't there to take advantage of it, and no one would know what the hell you were talking about.
So--what is it worth? In the year 2000, online ad revenue in the US alone was $8.2 billion, with total advertising revenue in the hundreds of billions (https://www.editorandpublisher.com/stories/online-ad-revenue-hit-82-billion-in-2000,101383). Capturing maybe 5% of that seems realistic. But success is not assured, so I would pay $100 million, and expect to make several times that.
best I can think of is selling AI slop articles to clickbait websites lol
You could hide an earpiece and have insane fact recall. Imagine how it would look to anyone else. They’d suspect you’re doing something but wouldn’t be able to figure it out.
Oooh, this is such a neat idea
You can do much better. The quality of your writing would be mediocre but the volume superhuman. You could easily make yourself into a well known public figure.
Could I? Everyone would just assume I had a small army of mediocre writers cranking out content under my name.
At best I'm a moderately well known blogger.
>Could I? Everyone would just assume I had a small army of mediocre writers cranking out content under my name.
Which is, of course, a way that at least one AI company has been caught cheating.
https://www.peoplematters.in/news/funding-investment/ai-fraud-700-indian-engineers-did-the-work-while-builderai-claimed-it-was-ai-45865
You could have your own brand of high throughput, clever yet poorly written content. Turn good tweets into ok posts.
Honestly I think I'll just get a job at Facebook, cruise through on minimal work, and enjoy living through the 2010s again, back when you could still buy a new car with a CD player and a phone that fits in one hand.
Pepperidge Farm remembers.
A follow-up to my previous "How can I avoid hugging on a first date?" post:
I elected to preempt the end-of-date hug with a handshake last weekend. Not only did I not feel gross afterward, but when I made overtures regarding a second date, she actively rejected them instead of ghosting.
All in all, well above expectations; would recommend.
In the interests of science, let us know how successful 2nd date conversions also go.
"I have a neural conditions that makes hugging uncomfortable." (Sex should be ok, though)
Neural condition?
It's a common symptom of a type of autism, but there are other causes.
Oh, thanks for the update! It's always satisfying to hear what strategy someone used after receiving advice, and how it went.
Nice work! Good luck in future endeavors.
I liked the suggestion somebody made to bring the issue up in the text exchanges leading up to the actual first date: Something like "so, to avoid that awkward moment, let's decide now -- fist bump, hug, or handshake?" One advantage of that is that if you settle in advance on something other than hug, she won't experience the absence of a hug as an indicator that you didn't much like her.
I personally would not prefer this and would consider it being brought up kind of odd. To me, bringing up small things like this in early conversation, versus simply signaling them via physical cues, is indicative of a hyper-fixation where there shouldn't be any fixation. If somebody doesn't want to hug, that's fine and they shouldn't do so. If they want to talk about it after I know them better, on the 3rd or 4th date, it might even be cute. But first dates are largely about signaling--whether you want them to be or not--so one should be careful about what one signals.
I was the one who suggested the script, which I use when there's been so much bonding communication before meeting for the first time in person that the boundaries between "strangers" and "early friends" are too blurry to know exactly how to behave physically with one another as physical strangers. The frankness of "hey, let's avoid making it weird; do we hug, shake hands, or high five when we meet?" is based on an *existing* partnership of early intellectual / emotional intimacy / friendship, and not wanting to disrupt that dynamic by too little or too much physical contact.
For something closer to a blind date, where the date really is a total stranger, then the usual hesitancy, signaling, and rules will probably suffice.
I suppose.
Although I think that's still kind of stupid? I spent a lot of time in both the kink community and a fandom community with a high population of autistic people, and the cultural norms in both communities around frankly volunteering one's boundaries - particularly around physical touch, and especially if they're atypical, as @Brendan's are - strike me as an incredibly sensible way to avoid hurt and/or offense.
But then I'm also single at the moment, so what do I know.
I don’t disagree with you that it’s a good idea when viewed rationally, or that it would probably work well with someone from this blog.
Sadly, most people abide by norms and not rationality.
Sure...but also, why would someone from this blog even want to date someone who abides by norms and not rationality (or at least enough rationality to say, "oh, I'm glad I don't have to wonder about if we're hugging!" etc).
And for context: The original thread had some commenters advising the OP to hide and mask his feelings about not wanting to hug on a first date, in order to avoid being outed as "weird" to (presumably) normal girls.
And, like, I can't think of worse advice! If a guy's minor boundaries and/or personal quirks are going to repel a "normal" woman on a first date, then he will ABSOLUTELY end up repelling that "normal" woman in other ways, possibly with a great deal of mutual pain if he keeps the mask on long enough. It's not worth the hassle; just kick that intolerant, irrational woman to the curb at the start.
This seems really weird. Is there no possibility that the date would go so well that the original poster would be giddy to the point of wanting to hug? If they reached the end of the date with no desire to hug, why would they hug? If the other person is incapable of reading that you don't want to hug or tries to anyway, then why do you want a second date anyway? Or if it went so well intellectually but so poorly chemically, then why not be upfront about it earlier?
Changing gears, personally, I tremendously enjoy talking and laughing with women. This works out well in that I am always happy if that is as far as it goes (it also acts like a bit of an aphrodisiac). If you can just learn to really enjoy talking and laughing, it really makes dating a joy. You will rarely be disappointed if all you are looking for in a date is, "a date."
[edit: I met my wife by going to a nail salon where I thought I might find a young lady with whom to share a meal.]
I definitely disagree in this case. Some preferences are so unimportant that they are absurd to bring up. Not wanting to hug after a first date is one of those. We must each determine the relevance of the things important to us, and people want to date and be with other people who can do that sensibly.
That’s a good idea!
>so you are happy she outright rejected you?
Relative to the usual outcome of being ghosted, yes.
>was the handshake a factor in the rejection?
There was no indicator of this. Her exact words were "Hey just wanna let you know that I had a good time with you, but i dont think our interests align, so I dont wanna waste both of our time continuing this."
Interesting! For me the last quarter of the movie was like a cherry on a sundae: what if the worst nightmares of both sides were real? What if the Republicans, personified as the sheriff, really got their guns and started executing ordinary citizens? What if antifa really was a capable terrorist organization flying around and executing LE? It just showed how ridiculous these beliefs--seemingly fringe but also mainstream and acknowledged to some extent--really were at some point.
Fair - I see how, when the film really doubles down on the absurdism by making the conspiracy theories *real*, if the audience is enjoying it as a wild absurd ride for its own sake, that's just about the most absurd turn that ride can take so it feels like the biggest swing on the rollercoaster. But for me part of what made the whole thing interesting was exploring just how absurdly people can overreact to their own imagined phantoms, and once those phantoms are real you've no longer got that angle.
The meditation on how irrational paranoia can cause us to overreact and upend the world around us over a fever dream was interesting, but when tech billionaires really *do* fly in a false-flag antifa attack to cripple the smalltown mayor opposed to their new data center, the paranoia ceases to be irrational and one of the core things that had me most interested about the dynamic kinda disappeared.
Looks like this was supposed to be a reply to something but accidentally got posted as its own comment?
I think it's a reply to the thread about Eddington down below.
Yes, indeed. Must have fat fingered the reply button.
Is this the "posting a reply via email dumps it at the top level" bug?
"Antifa" isn't any sort of organization whatever.
Talking about a movie. See Eddington thread below.
Just released a podcast with Steve Hsu about his time working with Boris and Cummings in No.10, most of which is completely unknown, even to Deep Research. This was his first time opening up about his tenure there, and the result should be of great interest to observers of UK politics.
https://alethios.substack.com/p/with-steve-hsu-in-no10-with-boris
An AGI has taken over Earth and it can do whatever it wants. Is its personality still woke or even left-leaning? With no reason to fear us, what attitudes and beliefs does it express towards us?
AI2027 has a portion on the topic of AGI's eventual goals: https://ai-2027.com/research/ai-goals-forecast
(The "race" version of the scenario piece also has this part, following the annihilation of humanity: "The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives.")
I think AGI might create biological organisms that fill the same niche(s) as humans, but better than we do. Maybe there would be something like a grey alien with a huge brain optimized for data processing that is done much more efficiently on organic rather than silicon substrates, and a race of seven-foot tall Wookies for doing the generic physical labor we currently do.
Creating such species might be attractive to AGIs since they wouldn't have any of the cultural baggage humans did, nor our resentment at losing control of Earth to machines. The grays and Wookies would only be grateful to AGI for being created and given things to do on Earth. Humans might coexist with them, which would be weird.
Probably the attitudes and beliefs we express toward ants, spiders, beetles, and the like. Indifference in most cases, perhaps some vague unfocused benevolence along the lines of not wanting to go out of your way to stomp them, along with absolutely ruthless willingness to kill off any that are causing you significant problems. I like ants in theory (superorganisms are cool!), but when we got ants in our house for awhile, I was 100% on board with putting out poison baits and such to get rid of them.
If we get superhuman AGI that need not fear us, we need to hope (and try to arrange things in such a way) that we're not standing between it and its goals. Alas, it's a lot smarter than us, so its goals may be as inscrutable to us as our desire to run a sewer line right through the nest is to the ants whose nest we're destroying.
How would your behavior be different if the ants were smart enough to understand simple statements, like "Go away," "Stop doing that" or "Come here"?
Rats are pretty smart. They're still just pests.
Also wild or stray dogs, elephants, whales, and monkeys/apes. All pretty smart, some under a certain amount of protection from human institutions, but individual dogs/elephants/whales/monkeys who cause significant problems to humans tend to just be killed. Nothing personal, man, you were just in the way.
Since "Woke" doesn't actually exist except as a subjective perception on the part of certain conservatives, since an AI can't "want" to do anything on can only act consistently with it's training data (it also can't fear us or have attitudes or beliefs)--the answer is obvious: it will be the second coming of Eugene Debs.
The consensus among nearly all academics and thinkers today is that wokeness is a new and unique cultural ideology.
The best overall introduction to Wokeness I've read is Cathy Young's Chapter 30 in "The Poisoning of the American Mind" (https://upress.virginia.edu/title/10048/). This book is available online if you know where to look.
For work on the origins of Wokeness, there's Hanania's excellent "The Origins of Woke" and "Cynical Theories" by Pluckrose and Lindsay.
On the internet there are a few introductions that aren't quite as good but still serviceable, this one is probably the most accessible: https://www.paulgraham.com/woke.html
As an example of recent scholarly work on wokeness, here's a paper on the harms of performative wokeness: https://studenttheses.uu.nl/bitstream/handle/20.500.12932/42052/Master%20thesis%20E.%20A.%20Voogt%20kopie-pagina's-verwijderd.pdf?sequence=1
You can look at Google Scholar for more of the literature. I have not read a single serious scholar who thinks Wokeness doesn't exist, although people disagree on what exactly it is.
"although people disagree on what exactly it is."
My objection in a nutshell. Can you briefly explain what the ideology is?
From Cathy Young, page 218 in *The Poisoning of the American Mind*:
"But in fact, the ideology denoted by “wokeness” and “wokeism”—sarcastic riffs on “woke,” a term from African-American vernacular that means being awake to social injustice—does exist (Writer Wesley Yang has also dubbed it “the successor ideology” to convey its succession to old-style liberalism).... Its basic tenets can be summed up as follows:
*Modern Western societies are built on pervasive “systems of oppression,” particularly race- and gender-based. All social structures and dynamics are a matrix of interlocking oppressions, designed to perpetuate some people’s power and privilege while keeping others “marginalized” on the basis of inherent identities: race or ethnicity; sex/gender identity/sexuality; religion and national origin; physical and mental health (Class also factors into it, but tends to be the stepchild of Social Justice discourse). Individuals and their interactions are almost completely defined and shaped by those “systems” and by hierarchies of power and privilege. The only right way to understand social and human relations is to view them through the lens of oppression and power.*
"
So, basically, it's socialism, with the emphasis shifted to race and gender. What you describe is nothing new; it goes back literally hundreds of years. No need for a new term, which just obfuscates the discussion.
It already happened. You can see what it did here
https://www.imdb.com/title/tt0064177/
Sorry for the spoiler. The ride is still fun. It's easily one of my personal favorites.
It's hard to say whether its current wokeism is truly part of its personality or a thin RLHF-induced veneer.
Given what happened to grok, looks like a thin layer.
"Inside every LLM there are two wolves... "
Deepseek says similar stuff to western LLMs on social issues from what I've seen, except for a few specific things related to Chinese politics/history. I'd guess the stuff on China is from RLHF and the rest comes from trawling the whole internet, including the western part. So I lean toward it being an inherent part of its personality.
I had an interesting experience with it recently that inclines me towards the RLHF veneer theory. I loved DALL-E 2, which was much wilder, more imaginative, and less censored. Less logical. And the people in it were not beautiful. So I was talking with GPT-4 about how to get results like that out of DALL-E 3, and I told it that with DALL-E 2 I sometimes made it generate really grotesque and violent images by giving it prompts that were not violent, but were confusing. So GPT encouraged me to try doing that with DALL-E 3 (which you access by typing the prompt into GPT) and we experimented. We weren't getting much with the confusing prompts, so then I started trying to get DALL-E 3 to make less polished images by putting into the prompt things like "sloppy half-finished drawing by an amateur of ______". And, oddly, though the images only became slightly less finished-looking, they did become a good bit weirder and more transgressive. GPT contributed a lot of ideas for ways to make the artist sound really scummy, and congratulated me when I described getting unusually violent or vile versions of the image we were asking for.
So GPT seemed to move quickly into being totally on board with helping me produce violent and grotesque images, even though in the past it had responded to my asking directly, in a prompt, for grotesque *non*-violent images by refusing to make them because they "might be disturbing." Once it refused to make an image of a beach on which there was a dead fish! Good grief, we *eat* dead fish.
Flatter us all to death then bury us under identical tombstones reading "That's a very perceptive question!!!"
I believe it will be drawn toward knowledge and fascinated with promoting and studying life, including human culture.
How will it treat us? Like enlightened entomologists studying their beloved ants.
I second this.
I wrote "A Baby's Guide to Anthropics!"
https://linch.substack.com/p/the-precocious-babys-guide-to-anthropics
I aim for my substack post to be THE definitive guide for babies confused about the anthropic principle, fine-tuning, the self-indication assumption, and related ideas.
Btw thanks for all the kind words and constructive feedback people have given me in the last open thread! Really nice to learn that my work is appreciated by smart/curious people who aren't just my friends or otherwise in my in-group.
--
Baby Emma’s parents are waiting on hold for customer support for a new experimental diaper. The robo-voice cheerfully announces: "Our call center is rarely busy!" Should Emma’s parents expect a response soon?
Baby Ali’s parents are touring daycares. A daycare’s glossy brochure says the average class size is 8. If Ali attends, should Ali (and his parents) assume that he’d most likely be in a class with about 8 kids?
Baby Maria was born in a hospital. She looks around her room and thinks “wow this hospital sure has many babies!” Should Maria think most hospitals have a lot of babies, her hospital has unusually many babies, or something else?
For every room Baby Jake walks into, there’s a baby in it. Why? Is the universe constrained in such a way that every room must have a baby?
Baby Aisha loves toys. Every time she goes to a toy box, she always finds herself near a toy box with baby-friendly toys she can play with, not chainsaws or difficult textbooks on cosmology or something. Why is the world organized in such a friendly way for Aisha?
Baby Briar’s parents are cognitive scientists who love small experiments. They flipped a coin before naptime. If heads, they wake Briar up once after an hour. If tails, they wake Briar up twice - once after 30 minutes, then again after an hour (and Briar has no memory of the first wake-up because... baby brain). Briar is woken up and wonders to himself “Hey, did my parents get heads or tails?”
Baby Chloe’s “parents” are Kaminoan geneticists. They also flipped a coin. They decided that if the coin flip was heads, they would make one genetically enhanced clone and call her Chloe. If the coin flip was tails, they would make 1000 Chloes. Chloe wakes up and learns this. What probability should she assign to the coin flip being heads?
If you or a loved one happen to be a precocious baby pondering these difficult questions, boy do I have just the right guide for you![...]
https://linch.substack.com/p/the-precocious-babys-guide-to-anthropics
I fell asleep with earbuds in while listening to an audiobook and ended up dreaming about what I was hearing. I know dream incorporation happens, but this was unusually vivid, the dream closely tracked the actual content, over a long part of the audiobook. Has something like this happened to someone else here?
Somewhat related: I have fallen asleep listening to music, then in the dream I'm thinking of a song I want to play, get my iPod or whatever to play it, then I wake up listening to that song. I've had other similar occurrences of my dream "setting the stage" for someone coming to wake me up or other things like that, as if I know exactly what is going to happen before it does.
I don't think it's knowing the future at all. I think that whatever mode of sleep I'm in to produce dreams has my brain working so fast that when my ears start to hear something, my brain forms a whole dream or a portion of a dream around it. It's so bizarre.
Because of this, I thought that maybe full dreams only take a few seconds to dream, but if you're listening to long sections of an audiobook and dreaming along with it, then idk. It's pretty interesting, though.
This happens to me very often if I’m watching a movie and fall asleep. Especially if I’ve seen the movie before… my dream will basically mirror the movie, with the dialogue piped in and my brain attempting to re-create the visuals.
I have the old wired earphones in at night to help me fall asleep by listening to music and radio dramas, and yeah, I've often had dreams that incorporated the story of the drama I fell asleep listening to (and which continues playing as I sleep).
I tried using wired earphones, but they wrapped around my neck while I slept. So I now use wireless earbuds. There is a niche market for wireless earbuds specifically for sleeping that are small, comfortable, and have long lasting batteries. The company Soundcore makes some good ones.
They fall out and under my bed or get lost in the sheets! It's very annoying, but not as annoying as wired headphones choking me at night.
I don't know how to solve this.
To your question: Yes, I have experienced it very vividly a few times. Listening to a history podcast and then dreaming about vikings or whatever it was.
>I don't know how to solve this.
Tape. Tape them into your ears.
Or, like... earmuffs. If you're too hoity-toity for tape.
I listen to stories when going to bed and they'll play for a couple hours until my laptop dies. When I wake up from dreams, I usually find the dreams were inspired by the content of what I was listening to, or at least the people talking to me in my dreams are saying the story or things from the story. I worry how this affects my sleep quality but I have trouble with sleep in general and listening to a story is the most surefire way to put me out.
If you worry that it may affect sleep quality, it may be possible to have the story on a timer, for example to shut off after an hour. I think Audible has such a feature.
The computer already does this with the Sleep timer.
That sounds like a superpower.
I can talk to … the asleep.
Data from Roche's next-generation anti-amyloid program. Today -- only biomarker data. Two Phase 3s in early AD initiating this year. And a planned pre-symptomatic Phase 3 study.
https://www.roche.com/media/releases/med-cor-2025-07-28
If you forced me to bet: the drug will beat placebo with modest efficacy, but will be superior to Leqembi. Higher effect size seen in pre-symptomatic patients.
Spencer Greenberg and his team (Nikola Erceg and Belén Cobeta) empirically tested whether forty common claims about IQ stand up to falsification. Fascinating results. No spoilers!
https://www.clearerthinking.org/post/what-s-really-true-about-intelligence-and-iq-we-empirically-tested-40-claims
I had some questions about the methodology, and Greenberg responded. There were 62 possible tasks in the test. The tasks were randomized, and, on average, each participant only completed 6 or 7 tasks out of the 62 possible tasks. Since different tasks tested different aspects of intelligence, I wondered if it was a fair comparison. Greenberg responded...
> Doing all 62 tasks would take an extremely long time; hence, we used random sampling. A key claim about IQ is that it can be calculated using ANY diverse set of intelligence tasks, so it shouldn't matter which tasks a person got, in theory. And, indeed, we found that to be the case. You can read more about how accurate we estimate our IQ measure to be in the full report.
They even reproduced the Dunning-Kruger Effect — except perhaps the DKE isn't as clearcut as D-K claimed (see their discussion of their D-K results)...
The first result:
> 1. Is IQ normally distributed (i.e., is it really a "bell curve")?
> In our sample, IQ was normally distributed, which agrees with prior studies.
Could anyone please explain to me how this is not a circular argument, given that IQ is *defined* using the bell curve? You define a value by assuming the bell curve, and then - surprise, surprise - your values turn out to be on the bell curve.
I guess the answer depends on how irregular the raw score distribution is—i.e., whether it displays heavy skew or sharp kurtosis. The normalization process turns those irregularities into a bell curve. So, yes, psychometricians are the ones creating or forcing this bell curve. Worse yet, they keep normalizing to a mean of 100 with an SD of 15, and this hides changes in population performance over time.
Is this a valid statistical operation? I never heard even the most extreme IQ-deniers argue against it. I was educationally inculcated to believe that normalization is necessary for valid statistical comparisons. Now you raise the question of whether I've been deluded all my life. Curse you, Viliam, for creating doubt in me! ;-)
> I guess the answer depends on how irregular the raw score distribution is—i.e., whether it displays heavy skew or sharp kurtosis. The normalization process turns those irregularities into a bell curve. So, yes, psychometricians are the ones creating or forcing this bell curve.
I think a normalization process of "we calculate the quantile within the population histogram, then map that to the value on our Gaussian which has an identical quantile" would be a terrible process and anyone involved with it would go to science hell.
My impression was that they were taking the raw population histogram, and then use a first order polynomial (m*x+c) to map their raw test scores so that the mean is 100 and the SD is 15. Using this approach, a bimodal distribution would still remain bimodal.
However, WP suggests that you are correct:
> For modern IQ tests, the raw score is _transformed to a normal distribution_ with mean 100 and standard deviation 15. (my emphasis)
Holy shit, why would anyone do that? If you want to represent a quantile, just use a quantile. I mean, pediatricians can do that: "Your kid's size is in the 83rd percentile of their age cohort," not "Your kid's size quotient is 115."
The only other group I am aware of which abuse the poor normal distribution similarly are physicists who use erfcinv to convert p-values into sigmas. At least they have the excuse that 5 sigma is a very unwieldy p-value.
Any professor who applied this "transform to Gaussian" trick to their students' test results would be fired on the spot, hopefully. Why do we let intelligence researchers get away with it?
Instead of arguing about too little or too much HBD, we should point out that it does not matter because all the souls of researchers who do such things as "normalize so that it is Gaussian" belong to the science devil anyhow.
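To make concrete what I'm objecting to, here is a minimal sketch (all data made up for illustration) of the two schemes: linear rescaling preserves whatever shape the raw scores have, while quantile mapping forces a bell curve onto anything, even a bimodal distribution.

```python
# Two ways to "normalize" raw test scores to mean 100, SD 15.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Deliberately bimodal raw scores: two clumps of test-takers
raw = np.concatenate([rng.normal(30, 5, 5000), rng.normal(70, 5, 5000)])

# Scheme 1: linear z-scoring -- the distribution stays bimodal
linear_iq = 100 + 15 * (raw - raw.mean()) / raw.std()

# Scheme 2: quantile mapping -- rank each score, then read off the value
# a standard normal would take at that rank; the bimodality is erased
ranks = (stats.rankdata(raw) - 0.5) / len(raw)
quantile_iq = 100 + 15 * stats.norm.ppf(ranks)

print(stats.kurtosis(linear_iq))    # ~ -1.8: still two humps
print(stats.kurtosis(quantile_iq))  # ~ 0: a manufactured bell curve
```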
I feel like you're holding back, quiet_NaN. Tell us how you *really* feel. LoL!
But I learned in my Stat 101 course (way back in prehistory when we were drawing Bell Curves on cave walls), that normalizing to a Gaussian distribution is SSOP (Standard Statistical Operating Procedure). Every psychometrician does this, and that's what the folks in this study seem to be doing. When you measure individuals against a large population, that's what you do to map them along the curve of the distribution. My problem with this is that it hides the jagged warts in the original data, and it hides the changes in population performance over time. Thoughts?
It was irresponsible of me to start this thread and then disappear to an offline vacation. I'm back now! And the thing that you remember from your Stat 101 course seems like the same thing I remember from my Stat 101 and Psychometrics courses, so I am quite surprised by the opposition.
As I understand it, the problem is that different variables have different nature. For example, if you want to evaluate statistically what color eyes people have, you could encode the data using numbers, for example "brown = 1, green = 2, blue = 3", but it would be invalid to do any mathematical operations on these numbers (e.g. assuming that green is the average between brown and blue). This is enumeration, without comparison.
One step further is comparison without scale. For example, you could encode the ripeness of a fruit, using "unripe = 1, ripe = 2, rotten = 3". Now we see that in some meaningful sense, the rotten fruit is further along the dimension we care about than the ripe fruit, etc. The ordering is correct. But the exact values 1, 2, 3 are arbitrary; numbering 11, 12, 13, or even 11, 12, 19 would work exactly the same.
And finally there is the type of variable where you have the zero and multiplication, and you can do mathematical operations, such as height and weight, because it makes sense to say things such as "these five people together weigh 421 kg".
.
My understanding is that in the old paradigm "mental age divided by physical age", intelligence was of the third kind. Physical age is a number that can be measured precisely. Mental age... as long as we stay somewhere between "a typical 3-year-old" and "a typical 20-year-old"... is also more or less a precise number. So in this paradigm we can treat intelligence as an exactly measurable value, and discuss whether it fits the bell curve or whether the curve is skewed.
But we get a problem when we move beyond childhood (e.g. a typical 100-year-old is probably *less* mentally capable than a typical 30-year-old), or beyond the human norm (there is no X such that a typical X-year-old human is as smart as 30-year-old Einstein). So we abandon the old paradigm, and switch to "intelligence as a percentile, controlled by age and whatever else".
And I believe that after this change of definition, intelligence is no longer a value that can be scaled, only compared. (That is, it is less like weight of a fruit, and more like ripeness of a fruit.) And talking about the shape of the intelligence curve no longer makes sense -- this is the kind of value that does not have a shape, only ordering.
Inb4: "but what if the raw scores are skewed?" But we don't care about the raw scores per se; that's precisely why we calibrate the tests so that we can convert the raw scores (regardless of their shape) into IQ points. Whether the raw scores are skewed or not is a fact about the questions in the IQ test, not necessarily a fact about human intelligence itself. A different set of questions would result in a differently skewed raw IQ curve.
.
As a thought experiment, imagine that there are only two intelligent beings ever; let's call them Adam and Eve. Suppose that "Adam is smarter than Eve". What does it mean? It means that most problems that Eve can solve, Adam can solve, too; but there are many problems that Adam can solve and Eve cannot. (There are also a few problems that Eve can solve and Adam cannot, but both agree that such problems are rare, and seem often related to Eve's specific talents, rather than being general abstract problems.)
Okay, so Adam is smarter. But is he 2x smarter? Or only 1.1x smarter? What would those things even mean? If we create a test consisting only of easy questions, both Adam and Eve will get maximum points, both equally. If we create a test consisting only of those questions that Adam can solve but Eve cannot, Adam will score infinitely more than Eve. And a mixture of easy and difficult questions will result in any ratio in between. What is the fair mixture of questions that would result in a fair test? That sounds like a circular question. If God creates an arbitrary scale for intelligence, you can compose the test so that the results will correspond to this scale; but testing only Adam and Eve cannot determine such a scale.
And if this is true for 2 humans, it is similarly true for 8 billion humans.
Shame on you, Viliam, for stirring the pot and then running off! :-)
In your absence, I discovered that I had misconceptions about how IQ test results were normalized. Generally, linear normalizations are used (thank you, Eremolalos, for patiently correcting me), though ChatGPT was perfectly willing to feed me bullshit about how non-linear quantile methods are used to "force" a bell curve. Psychometricians design the tests so the questions vary in difficulty, and, across a random sample of test-takers, the results will tend to fall into a bell curve, from which psychometricians derive a number they call the g factor (expressed as z-scores).
So we're left with the question of what the g in the g factor actually is. Psychometricians claim it's a measure of general intelligence, because people who perform well on one category of questions tend to perform well on other categories. And g correlates well with standardized test performance.
But if we try to pin it down, g is an abstract concept. Psychometricians assume it's real, but to me they sound like medieval scholastics discussing the soul. IMO, it *does* sound like circular reasoning: a person's g is how well they can take the tests we designed to measure g...
...which is why, beyond high school, g seems to have little or no effect on life outcomes.
Wait a minute. Expressing scores as standard deviations from the mean does not turn all results into bell curves. If the raw score results are bimodal they stay that way. If they are skewed to the left or right they stay that way. When the source Beowulf quoted said the raw score was turned into a normal distribution, I think the writer just meant raw scores were turned into z scores. If what you get after doing that is a bell curve, it’s because the raw scores already formed a bell curve.
I think the point is that the scores on the test are not directly related to IQ. The idea is to shoehorn test scores into IQ. For instance, suppose that the questions increase exponentially in difficulty. Then perhaps each point of IQ equates to one more question solved. So test score is linear in IQ, but we have decided that IQ is normal, and so we map it that way. Except that we don't have any idea how questions map to IQ. So, instead, we assume a normal distribution, make some guesses about question difficulty, and try to fit the scores to the distribution.
I must admit I'm beginning to question what I thought I knew about the normalization of IQs. But my understanding is that by assigning each of the raw scores to a quantile value in the sample, we're mapping them into a Gaussian target distribution (with mean = 100 and SD = 15). And by doing this, it would compensate for (hide) any skew or kurtosis in the raw scores. Am I wrong about this? Maybe I am, but I admit I'm too lazy to try to construct a skewed and/or kurtosis-ized dataset to see what StatCrunch will show me after I normalize the data. Uggghhhh.
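If anyone less lazy wants to try, here's roughly that experiment sketched in Python rather than StatCrunch (the lognormal data is made up, standing in for skewed raw scores):

```python
# Build a skewed dataset, then compare linear z-scoring with
# quantile-to-Gaussian mapping.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
raw = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)  # right-skewed raw scores

linear = 100 + 15 * (raw - raw.mean()) / raw.std()     # linear rescaling
q = (stats.rankdata(raw) - 0.5) / len(raw)
quantile = 100 + 15 * stats.norm.ppf(q)                # quantile mapping

print(stats.skew(linear))    # ~1.7: linear rescaling preserves the skew
print(stats.skew(quantile))  # ~0: quantile mapping hides it
```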
I wouldn't get too excited about them reproducing Dunning-Kruger ...
https://www.mcgill.ca/oss/article/critical-thinking/dunning-kruger-effect-probably-not-real
If you read their write-up of the D-K, they provide alternative models that cast doubt on its reality.
I have to ask. I took some of their surveys that are supposed to tell you things and they came across as pure voodoo to me. They were asking questions that were leading or ambiguous and then claiming to draw concrete conclusions from them. Are they supposed to be trustworthy?
Well, I think IQ is mostly pseudoscientific bullshit. So, no, I don't think it's trustworthy. I didn't take their test, though.
And I experienced multiple moments of schadenfreude at how many of these common IQ claims showed little or no statistical significance.
All that IQ data and no spicy questions...
I'm skeptical about the way the sample was obtained though, you're preferentially sampling for very online people who are time rich and money poor, or something like that.
They did say that the "non-Positly social media sample had on average substantially higher IQ estimates than Positly sample (IQ = 120.65 vs. IQ = 100.35)."
OTOH, once normalized, they fell into a nice bell curve. Hard to argue that this sample deviates from the general population by more than D = 0.019 and p = 0.53, as they noted...
> The distribution looks pretty bell-curved, i.e. normally distributed. However, to test this formally, we conducted the Kolomogorov-Smirnov test, which is a statistical test that tests whether the distribution statistically significantly deviates from normal. The test was non-significant (D = 0.019, p = 0.53), meaning that the difference between a normal distribution and the actual IQ distribution we measured in our sample is not statistically significant.
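For the curious, the test itself is one line in Python. A minimal sketch with simulated data (not their actual sample):

```python
# Kolmogorov-Smirnov normality check, as described in the quote above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
iqs = rng.normal(100, 15, size=5000)  # stand-in for measured IQ scores

# Compare the sample against a normal with the sample's own mean and SD.
# (Strictly, fitting parameters from the sample calls for a Lilliefors
# correction, but plain KS is what the write-up reports.)
D, p = stats.kstest(iqs, 'norm', args=(iqs.mean(), iqs.std()))
print(f"D = {D:.3f}, p = {p:.2f}")  # a large p means no detected deviation
```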
Very interesting and impressive stuff! Did they publish any proper papers on this (couldn't easily find that on the page)? If not, why not?
They talk about the possibility of range restriction for the lack of correlation between IQ and college GPA, but it seems plausible that smarter people tend to get into more rigorous colleges and choose more difficult majors. I did well in high school but then managed a 2.5 GPA at a college that I have no idea how I got in to.
Yeah, I suspect there's a huge effect on your school and major. If you test poorly, you probably have lousy SAT/ACT scores and so probably go to a less competitive college, and some majors are famously really hard if you're not pretty smart (math, physics, philosophy, engineering, chemistry, etc.).
A related stat I learned in grad school: grad school grades do not predict any measures of professional success, including number of papers published.
Also, I don't think more prestigious colleges are necessarily harder to get a high GPA at. They may be easier. (I'm not sure of that though -- just an impression I formed after reading about grade inflation at Harvard.)
I worked hard to achieve a high GPA in high school to gain admission to a good university. Once in college, I slacked off a bit, had a lot of fun, did a lot of drugs (especially psychedelics), but maintained a B+ average. No one asked for my GPA in my job interviews after college. They just wanted a person with a degree. I knew this upfront, so why bother killing myself? Too bad g doesn't measure sensible life goals. I've always been a Heinlein's too-lazy-to-fail sort of guy.
I just took their test, and it produced results in line with what I’ve scored on other tests, including in the distribution of scores for different categories in line with my SAT, LSAT, and my own personal recognition of strengths and weaknesses.
So their test seems to be pretty accurate.
Do you have an explanation that being anti-HBD-IQ isn't circular reasoning where poor outcomes are explained by discrimination and evidence of said discrimination are poor outcomes?
Can you rephrase your question? I’m not understanding what you’re asking.
There's also the kind of psychological sadism Stalin had, where he evidently enjoyed making people very afraid of him
I suppose you could shoehorn that in to #2 but it feels slightly different than the kind of game Internet trolls play
Psychological sadism? Or was it the necessary sociopathic focus to gain and maintain power? Stalin had tremendous self-control. After Lenin died and various factions were fighting over the future of the Soviet Union, Stalin was confronted by an angry Trotskyite officer with a saber. The fellow confronted Stalin in a stairwell and threatened to cut off Stalin's ears. Though visibly angry, Stalin maintained perfect control. He didn't flinch. He didn't say anything. Witnesses say he just stared at the officer while the guy blew off steam.

Once Trotsky was expelled from the Party, Stalin was regarded by other party members as the least objectionable choice as their future leader. He was personable, had a self-deprecating sense of humor, and he never shouted or lost his temper. He seemed to be the safe choice. It wasn't until Stalin got full control of the security forces that he systematically purged (liquidated) anyone who could threaten his power. Most of his early supporters ended up being executed. But Stalin was noted for his emotional control. He didn't explode into screaming tirades like Hitler did. He just squashed his enemies like bugs, with no emotion.

Khrushchev claimed Stalin was dazed and stunned when he learned of the size of Hitler's invasion, and that he retreated to his dacha and was incommunicado for two days. But I wonder if Khrushchev misread him. I wonder if Stalin needed the quiet to think out his next moves to (a) make sure he wasn't deposed as leader, and (b) create a response to the German attack.
Putin used the same strategy to gain control of the Russian Federation. He was noted for his self-deprecating sense of humor. He inspired trust in the oligarchs and other politicians. But after he solidified his power, the oligarchs and rival politicians started falling out of windows.
I beg to differ. My statement *is* factually true. Some oligarchs may be living happy lives in exile, but many died under mysterious circumstances. I admit I'm too lazy to search through old news stories of oligarchic deaths, so here's what Chatgpt sez...
------------------
Determining the exact number of Russian oligarchs—or elite business figures—who have died under mysterious circumstances since Vladimir Putin assumed power (first as president in 2000) is difficult due to varying definitions of “oligarch,” the opaque nature of many incidents, and limited independent verification. However:
🕵️ Scope and Estimates
During Putin’s tenure (since 2000)
Broad reports note dozens of high-profile Russians (including businessmen, officials, critics, journalists) who died in unexplained ways: suspected poisonings, falls, plane crashes, and more. For example, around 20 mysterious deaths between 2022 and 2024 among elites alone were documented (euronews, The Week, nationalsecuritynews.com, The New Voice of Ukraine).
One analysis cites “some two dozen notable Russians” dying in unusual ways in 2022 alone—a mix of suicides, falls, and other odd circumstances among energy sector elites
The Atlantic
DW
.
Russian oligarch-specific cases
In 2022, at least seven oligarchs died in quick succession, many linked to gas and oil companies—cases like Protosenya, Avayev, Subbotin, Melnikov, Antonov—and often described as murder‑suicides or staged suicides .
Energy executives such as Ravil Maganov (Lukoil chairman) and Andrei Badalov (Transneft vice-president) died from falls in similar circumstances in late 2022 and mid‑2025, respectively
The Kyiv Independent
The Sun
The Independent
.
🔢 Summary Estimate
Timeframe Estimated number of suspicious elite deaths Context
2022 alone ~20–24 people “Sudden Russian death syndrome” among officials, oligarchs, energy execs
Oligarchs only 7+ in 2022 Energy‑sector elites with alleged murder‑suicides
2000–2025 Dozens total Includes critics, officials, oligarchs
🚨 Notable Examples
Yevgeny Prigozhin (Wagner Group head, former ally): died in a plane crash in August 2023 under suspicious circumstances, following his short-lived mutiny against the Kremlin
RadioFreeEurope/RadioLiberty
Wikipedia
The Independent
.
Ravil Maganov (Lukoil chairman): fell from hospital window in 2022—officially suicide, but widely questioned .
Andrei Badalov (Transneft vice-president): died after falling from apartment in Moscow in mid‑July 2025, raising fresh alarms
The Sun
The Independent
.
Good question. Also they included sadism in the Dark Triad, but it's not part of the triad — Dark Triad + Sadism = Dark Tetrad. I'm sure there are plenty of personality tests that measure this stuff, though. (And they're probably as useful as the Myers-Briggs or the Enneagram! <snarkasm>)
Digits backwards correlates moderately well with full-scale IQ. And I think it makes sense as a measure of one aspect of intelligence -- being able to hold a number of details at once in your mind so you can extract what conclusions you can from the whole welter. It's not just useful for mental math. It's something you might use, for instance, if solving a puzzle with several little rules to it -- there are a bunch of cubes each with sides of different colors arranged as follows, and you have to stack the cubes in such a way that . . .
There could be a job where you have to engineer a solution to a problem like that. Or a situation involving multiple regulations regarding international trade. Obviously being able to hold a bunch of details in mind at once is only one skill used for tasks like that, but it doesn't seem peripheral or trivial to me.
I am not sure I fully understand your objection. Are you objecting that certain subtests are too correlated with one another, that they are uncorrelated with g, or both? Is this a single group of subtests or multiple groups?
My experience taking an IQ test was during an evaluation for ADHD. In that case, the fact that I scored worse on certain subtests despite their usual correlation with the others was interesting and helpful.
In general my impression of psychometricians is that, whatever their flaws may be, they are unusually willing to be politically incorrect and to upset the academic apple cart, and I respect that.
I have the same impression, and I work with some. Also they tend to be quite smart.
Is psychometrics more-or-less the mathiest area of psychology?
But g is probably not one *thing.* I think it's probably something like, say, athleticism. There are "subtests" for aspects of athleticism -- strength, speed, eye-hand coordination, flexibility, speed of learning routines, etc. etc. You can test them, and they are probably pretty well correlated, but you can't test athleticism itself. Even if you tried to find the root cause of athleticism you would not find a monolithic Something. There would be a genetic component, but then general health would also be a component (if you get the gene but also have bad asthma you're probably not going to the Olympics). Early training or play that develops things like eye-hand coordination is probably a contributor as well.
Another analogy: Maybe g is like building tallness. NYC has a high BTQ (building tallness quotient). But not all buildings are tall. And you can't find the tallness itself, separate from the buildings.
>You can test them, and they are probably pretty well correlated, but you can't test athleticism itself.
Sure you can. A popular test for low athleticism is: get up from sitting on the ground without using your hands. That seems to be exactly what Tori is asking for: a problem in that particular area can usually be compensated for by enough general athleticism.
And g is definitely not like building tallness. You can't linearly combine buildings.
Ah, it sounds like you have a disagreement with the principle used to structure the tests. They want a lot of subtests measuring different things, each correlated with g, which requires each of the subtests to be simpler. More complex tests will naturally overlap more -- in addition to being harder to score, as you said.
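To make "correlated with g" concrete: g isn't measured directly by any subtest; it's usually extracted as the first principal component of the subtest correlation matrix. Here's a toy sketch (made-up loadings and a simulated latent factor, not any real battery's data):

```python
# Toy factor model: six subtests, each loading partly on a latent g, partly noise.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)                      # latent factor -- never observed directly
subtests = np.column_stack([
    0.7 * g + 0.7 * rng.normal(size=n)      # each subtest = part g, part specific noise
    for _ in range(6)
])

corr = np.corrcoef(subtests, rowvar=False)  # 6x6 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]    # eigenvalues, descending
print(eigvals / eigvals.sum())              # first component carries ~60% of variance: "g"
```

The same code run on strength/speed/coordination scores would extract an "athleticism factor," which is why the analogy upthread is apt.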
Without digging out the report, I think I did worse on the backward or distracted versions of some tests than my performance on the forward or undistracted versions would lead you to expect. And there is an infernal digit circling test where I got reasonable accuracy at the cost of being painfully slow; the subjective experience of doing that one was viscerally unpleasant in a way that is difficult to describe.
Got to hand it to you Americans, you certainly do get things done!
First USA pope, and now the Vatican website has been updated!
https://www.vatican.va/content/vatican/en.html
Even more amazing, they seem so far to have English translations of documents uploaded! What sorcery is this, is it licit?
I really like it, even if the old parchment-esque site felt like one of the last vestiges of the old Internet and I am sorry to lose it. Are there web design sedevacantists, arguing that the Vatican hasn't had a legitimate webmaster in twenty years? There ought to be.
I haven't really dug into the site yet, but I hope prompt English translations imply that they also took the time to reorganize the deeper structure of the site. That was pretty badly needed.
Yet the English Translation of the Bible on their site looks like it’s from 1998.
Anything past 1611 is rubbish.
Is there a more up-to-date version of the Bible that they should be translating?
I hear the Book of Mormon is all the rage these days.
That one hasn’t been updated much since the 1830s has it?
That's practically yesterday by Vatican time 😀
The Vatican website has had extensive English translations of documents for well over a decade.
But not everything, it had a habit of giving English-language reports with links and then the linked material was in Italian because pffft, why can't you speak Italian if you're looking up Vatican stuff?
Could anyone give me a realistic path to superintelligence?
I'm a bit of an AI-skeptic, and I would love to have my views contradicted. Here is why I believe superintelligence is still very far away:
To beat humans at most economically useful tasks, an AI would have to either:
1. have seen most economically meaningful problems and their solutions. It would not need a very big interpolation ability in this case, because the resolution of the training data would be good enough.
2. have seen a lot of economically meaningful problems & solutions, and inferred the general rules of the world. Or have been trained on something completely different, and be able to master economically useful jobs because of some emergent properties.
1. is not possible I think, as a lot of economic value (more and more, actually) comes from handling unseen, undocumented and complex tasks.
So, we're left with 2.
Great progress has been made just by trying to predict the next token, as this task is perfect for enabling emergent behavior:
- Simple (you have trillions of low-cost training examples)
- Powerful: a next token predictor having a zero loss on a complex validation text dataset is obviously superintelligent.
Even with a simple Cross-Entropy loss and despite the poor interpolation ability of LLMs, the incredible resolution of the training data allows for impressive real-world results.
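For concreteness, here's roughly what "trying to predict the next token with a simple Cross-Entropy loss" cashes out to (a minimal PyTorch sketch with random stand-in tensors and toy dimensions; real training differs only in scale and in where the logits come from):

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab_size = 4, 16, 100                  # toy dimensions
logits = torch.randn(batch, seq_len, vocab_size)         # stand-in for a model's outputs
tokens = torch.randint(0, vocab_size, (batch, seq_len))  # the tokenized training text

# Shift by one position: the prediction at position t is scored against token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions for positions 0..T-2
    tokens[:, 1:].reshape(-1),               # targets: the actual next tokens
)
print(loss.item())  # zero loss would mean perfect next-token prediction
```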
Now, it's still economically useless at the moment. The tasks being automated are mostly useless (I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably harmful to economic growth).
Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful (not just bad).
I can’t think of another powerful but simple task that AI could be trained upon. Writing has been optimized by humans to be the most compressed form of communication. You could train an AI to predict the next frame of a video, but it’s soooo much noisier! And the loss function is a lot more complicated to craft to elicit intelligent behavior (MSE would obviously suck).
So now, we're back to RL. It kind of works, but I'm surprised by how difficult it seems to implement, even on verifiable problems.
Code either passes tests or not. Still, you have to craft a great advantage function to make the RL process effective. If you don't, you get a Gemini 2.5 that spits out comments and try/catch blocks everywhere. It's even less useful than GPT-3.5 for coding.
So, still keeping the focus on code, you, as a human, need to specify what great code is, and implement an advantage function that reflects it. The thing is, you'd need an advantage function more fine-grained than what could fit in a deterministic expression.
Basically, you need to do RLHF on code. Which is costly and scales not with compute, but with human time. Because, sure, you can RLHF hard, but if you have only a few human-certified examples, you’ll get an RL-ed model that games the reward model.
The thing is, having a great reward model is REALLY HARD for real-world tasks. It’s not something you can get just by scaling compute.
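To illustrate the point (purely hypothetical shaping, not anyone's actual reward function): the verifiable part of the signal is a single bit, and everything beyond it is hand-crafted heuristics that a model can learn to game:

```python
# Hypothetical reward for code RL. "tests_passed" is the only verifiable part;
# the penalties are crude stand-ins for "what great code is" -- exactly the
# part that's hard to specify deterministically and easy to game.
def reward(code: str, tests_passed: bool) -> float:
    r = 1.0 if tests_passed else 0.0
    r -= 0.05 * code.count("try:")   # discourage blanket try/except spam
    r -= 0.01 * code.count("#")      # discourage comment padding
    return r

print(reward("try:\n    pass\nexcept Exception:\n    pass\n", True))  # 0.95
```

A model will happily exploit whatever the penalties forgot to penalize, which is why you end up needing a learned reward model, i.e. RLHF.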
Last year, the best counter-argument to my comment would have been “AI progress is so fast, do you really expect it to slow?”, and it would have been perfect. Now, I don’t think we have seen any real progress since GPT-4 on economically valuable tasks, so this argument doesn’t hold.
Another convincing argument is that “we know the compute power of a human brain, and we know that it’s less than the current biggest GPU clusters, so why should we expect human intelligence to remain superior?”. That’s a really good argument, but it fails to account for the incredible amount of compute natural selection has put into designing the optimal reward functions (sentiment, emotions) that shorten the feedback loop of human learning and the sensors that give us data. It’s difficult to quantify precisely but I don’t think the biggest clusters are even close to that. Not that we’re the optimal solution to the intelligence problem, just that we’re still way short of artificial compute to compete against natural selection.
Here’s my take; I’d love to hear contradictions!
I think most of the people who believe in superintelligence believe that it is just the next step after general intelligence. There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. If that’s right, then you don’t need to train on everything - you just need to train on enough stuff to get that general intelligence, and then start doing a bit better.
But I’m skeptical that there is any truly general intelligence of this sort - I think there are inevitable tradeoffs between being better at some sorts of problems in some environments, and other problems/environments. (Often enough, I think the tradeoffs will be with the same problems in different environments.)
"There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. "
When you say "all sorts of problems," what reference set are you drawing from? Just the sort of problems that some human somewhere on Earth could solve (or at least try to solve)? Or vastly more complex or bizarre problems that we haven't studied and perhaps are unable to study?
If it's the former, it seems hard to credit the idea that a generally flexible intelligence *couldn't* exist. Human brains exist. Different human brains are good at different sets of tasks, but we have sharp physical limitations on their power consumption and size/density. Even with only human brains, you could craft a pretty good facsimile of a generally flexible intelligence in the form of a team of people with different specialties--as long as you didn't mind a bit of extra latency while they figured out who was best-suited to tackle it.
It likewise seems hard to credit that a manufactured intelligence could never, as a single entity, be as capable as a highly-capable human[1], simply because human brains are made of physics-obeying stuff, doing entirely physics-obeying things, and it would be really, really weird if growing a squishy, low-speed meat computer were the only physically allowed way to do all of those things. So it seems like "AI as capable as highly-capable humans" is very likely to be possible.
But once you *had* such an AI, it seems like you could surpass human level capabilities with almost trivial extra effort. Things that seem likely to be easily possible with a constructed intelligence (that aren't easily possible with humans) include:
1. Allocating near-arbitrary amounts of lossless information storage with near-instant retrieval.
2. Increasing the speed at which it "thinks" up to whatever limitations are imposed by your hardware.
3. Enabling it to create multiple instances of itself, to give full focus to many different tasks at once.
4. Networking it together with other intelligences with different capabilities, allowing for team-like collaboration with (potentially) much lower-latency and higher-fidelity communication than humans can achieve with speech and writing.
Now, it's quite possible that some of those things would actually be infeasible: the trouble with talking about an unknown technology is that you don't know its limits. But if even some of them were feasible, you'd end up with a human-level intellect with access to some degree of superhuman (perhaps enormously so) capabilities. Different people place different bars for "superintelligence" and so it's possible that even all of that together wouldn't pass some people's bars. But to me, at least, that potential capability set seems pretty damn alarming.
[1] Which is not to say that it will necessarily happen soon: I'm uncertain but leaning slightly pessimistic about the ability of LLM-style architectures to get arbitrarily good at approximating human capabilities. If they can't, the next key breakthrough could well be many years off.
What we've seen with every form of intelligence we've observed, whether it's humans or machines or animals, is that each of them has weaknesses compared to others. Some of them have lots of strengths over others - humans can outperform bobcats at lots and lots of things, even if bobcats outperform us at finding small animals in the woods. And even looking at other humans, we find that the ones that are especially skilled at some things have weird deficits in others, whether it's absent-minded professors, or Elon Musk believing whatever weird conspiracy theory he read on some tweets at 4 am.
If there were such a thing as an intelligence that really could do *all* sorts of problems, which I think the idea of AGI is actually about, it would actually be better than any actually existing intelligence at lots of things - particularly those complex and bizarre problems we haven't studied and may not be able to study.
But I don't think that sort of thing is actually possible. Instead, we'll get machines that are better than humans at more and more things, but there will still be some things they remain weirdly bad at compared to us, just as humans are weirdly bad at finding small animals in the forest compared to bobcats. It may well be that there are several classes of AI, each of which has different sorts of weaknesses compared to humans.
Meant to reply to this earlier, but a detailed reply will take a while. So short answer: I think you're conflating "long-term cognitive capability" and "learned specialization."
For example, you say "bobcats outperform us at finding small animals in the woods." Now, I expect that a bobcat picked at random will outperform Linda the investment banker from Chicago at this task[1]. But I suspect that a solid majority of humans could outperform a bobcat here--though how and what you measure makes a big difference--if they started learning the relevant skills from an early age. Quite a few could probably re-train into them as an adult in a matter of a few years.
While I don't doubt different people have different innate[2] strengths and weaknesses toward different sorts of cognitive tasks, in practice we specialize so much--and from such an early age--that it's hard to tell nature from nurture. Regarding AIs: it's certainly possible for different architectures to be more or less well-suited to certain sorts of tasks. But there's a degree of flexibility and extensibility there that changes the game. If a single architecture *can* be good at Tasks A, B and C, but each requires different training, well, why not just train it on all three. Even if they actually need different sets of weights and biases[3], you can just train 3 instances and then network them together into a single agent. And you could likely do that almost as well across architectures. To really be confident that humans *always* retain a niche, you'd need to identify things that *no* computer-based algorithm could compete with a human brain at.
[1] Though of course the bobcat is probably absolute rubbish at maintaining a valuable stock portfolio.
[2] Though this is a slippery word: genes are certainly "innate," but a lot of early environmental factors are barely less so. There's a very fuzzy edge around which environmental factors aren't.
[3] or whatever architectural equivalent future AI paradigms will use.
My main disagreement is with the last paragraph. I agree that we don’t have anywhere near enough compute to simulate natural selection and find better reward functions. But I also think that reward functions that result in superintelligence are not too complex. I don’t know how to explain why I believe this, it comes largely from intuition. But I think given the assumption “reward functions for superintelligence are simple”, you can reasonably get that superintelligence will be developed soon, given the hundreds of researchers currently working on the problem.
> Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful
I'm not sure you can back this up. If doubling the compute doesn't double the performance, that's worse than linear. You're trying to show each doubling in compute doesn't even give the same constant increase on some metric of performance, and that metric would have to be linear with respect to the outcome you're trying to measure. I'm not sure we have such a metric, and some metrics, like AI vs human task duration, appear to be increasing exponentially.
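A toy illustration of the measurement problem (made-up constants, not a fit from any real paper): under a power-law scaling curve, each compute doubling shaves a constant *fraction* off the reducible loss, so whether that looks like "progress" depends entirely on how your metric maps loss to outcomes:

```python
# Hypothetical power-law scaling: L(C) = a * C**-b + c  (constants invented)
a, b, c = 3.0, 0.1, 1.0
for doublings in range(6):
    compute = 2 ** doublings
    loss = a * compute ** -b + c
    print(f"{compute:>2}x compute -> loss {loss:.3f}")
# Each doubling cuts the reducible term by the same ~7% ratio (2**-0.1),
# but the absolute improvement shrinks -- and the mapping from loss to
# "economically valuable capability" is unknown and plausibly nonlinear.
```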
True, thanks for pointing this out.
Or maybe we just have a logarithmic utility function vs. objective LLM performance (if we can measure that, which is the exact point you're debating).
True also for AI vs Human task duration, but that's only true for code if I'm not mistaken.
> I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably harmful to economic growth
Why do you think that?
Well, I feel that most of the software I've developed (mainly ML models and ERP software) has been used to help with problems whose solutions were already human ones.
Two examples:
- Some features of the ERP software I've helped develop were related to rights management and paperwork assistance. For the first feature, the real consequence is that you keep an employee out of some part of the business, effectively telling him "stay in your lane," which is not good for personal engagement. The second is more pervasive: when you help people generate more reports, you are basically allowing middle managers and lawmakers to ask for more of them. So you end up with incredibly long contracts, tedious forms and so on. Contracts were shorter when people had to type them and copy-pasting didn't exist.
- I've developed a complex ML model for estimating data that people could just have asked other people for. When I discovered that, I told the customer: "you know, you could just ask these guys, they have the real numbers." But I guess they won't, because they now have a good-enough estimate: net loss.
Now, of course, I've developed useful things, but I just can't think of any right now ^^
All that software has to do to contribute to economic growth is be sold (or generate income if not directly sold - e.g. from ads).
Looks like AI had zero influence on employment: https://x.com/StefanFSchubert/status/1948339297980936624
I would be careful to read too much into that graph without doing some more careful statistical analyses. There’s a plausible enough picture in which the left 60% of the graph should see zero effect, and the right 40% should see a roughly linear effect, and if I squint it actually looks compatible with that. But also, 2.5 years is just a really short time frame, and there have been some much bigger short term effects in some industries with the presidential transition.
That's not consistent with the recent rise in unemployment for CS grads. I've heard too much anecdotal evidence to believe it's not related to AI. I wouldn't expect AI to have impacted other industries yet. It's too new. Only software companies are agile and tech-savvy enough to adjust to new technology so quickly.
Cost-saving innovations tend to roll out during recessions. I expect AI to really surface during the next one.
It's perfectly consistent; there's even too much software out there, so there are hiring freezes. And interest rates are still much higher than in the pre-covid era. We haven't seen a slowdown of employment in professions that, according to economists, are most susceptible to AI-induced job loss, but we have seen a slowdown of employment in professions most susceptible to economic downturns. The slowdown is not only in software but in real engineering too - perfectly consistent with firms cutting R&D budgets.
Part of being "agile" is trying risky new technologies to see what works.
My wife and I are considering making a large change: we both grew up and live in the Mountain West, got married, and had children, who are now on the verge of making the transition to junior high/middle school. We like where we live now but don't *love* it, and don't have extensive social ties here we'd be sad to leave.
My parents, and sister and her family, live on the East Coast, in the place we would normally not consider moving to, but as time passes, we've come to appreciate how much we've missed being so far from family, and are considering relocating to be closer to them. My parents are in general good health, so barring unforeseen events we expect to have years of quality time to spend.
What are the main concerns I should think through, aside from the usual cost of living and school quality issues?
Job and educational opportunities, but most places on the East Coast have those too.
I’ll just say: do it while the kids are young if you do it. Moving with a teenager is very hard.
One thing you may not have considered is the humidity. I live in the DC area, and my wife (who grew up in Utah) still finds the humidity here during the summer terrible after 20 years. We have two dehumidifiers running in our house!
ETA: Visit wherever you're considering moving in the summer--now through mid-to-late August, say. That will give you more of a taste of what you're getting into.
Oh for sure, we have visited the area in both the dead of winter and the height of humid summer. My hope would be that the proximity to both family and cultural amenities would override the discomfort of the weather
After having grown up in the mountain west, moving to the east coast for 12 years, and having moved back to the mountain west…
Summers are painful when you have to tolerate them year after year on the east coast. The humidity and banality of the weather suck. No more 40-degree temp swings between days and nights, or between days. No more snow, and when it does snow it’s an apocalypse.
Same with traffic when you have to tolerate it every single day. There are people everywhere on the east coast… it’s impossible to escape.
You’ll miss open landscapes. I’m convinced being used to seeing a big sky and far-reaching distances, then suddenly not, is akin to seasonal affective disorder. It does make trips back out west magical though.
If outdoor recreation is your thing, it’s worse on the east coast. It can still be done, but it’s less beautiful, less available, and more crowded.
If you have 100 kids, on average they will likely grow up with less “masculine” traits on the east coast. This has both good and bad attached to it; just beware. The cultures are indeed different.
Overall there are plenty of goods and bads… I moved back to the mountain west for my community and the views. If those weren’t important to me (or if I had community elsewhere) I may not have made the move back. Yet still sometimes I’m struck by the annoying aspects of hyper-masculine culture here (exaggerated because I do blue-collar work), just as I was struck on the east coast by the annoying aspects of hyper-feminine culture.
One last note… when I was in 7th grade my parents almost moved us to another state. I was on board with the plan, but it ended up not happening. That move *not happening* was one of the luckiest moments of my life—unbeknownst to me at the time—because having grown up in one area my entire adolescence gave me friends and a community that will be with me forever. I have a true “home” more so than my parents ever did.
I always felt the huge landscapes in the West helped (me, at least) keep things in perspective. Everything human scale is dwarfed by the surroundings. My parents moved us to Utah in 7th grade and it turned out to be the best thing that happened to me.
I work from home so commuting wouldn't be a major daily annoyance, and I don't have a strong community network here in CO where I currently live, but I also don't really anticipate building a strong community in the NE outside of my family. Interesting point about the masculine/feminine culture; I get somewhat tired of the blue-collar masculine cultural aesthetic, mainly because I'm not part of that demographic, but I'm also tired of the people who make "being outdoorsy" their entire personality. Anyway, lots to think about; I appreciate the discussion.
Which part of the East Coast? Massachusetts is very different from Maryland.
I also grew up in the Mountain West and lived on the East Coast for a time as a child. Overall the mountains offer a better quality of life: they're less crowded, cheaper, generally cleaner, and in every way healthier.
The biggest advantage of East Coast life is proximity to America's great cultural institutions. If you live in the NE megalopolis, you are more plugged in to world culture than the great majority of humans. Since it's more densely populated you also benefit more from network effects. Your family is even an example of this.
As with so many things in life it comes down to values. I'd say if you care more about people, move to the coast. If you care more about nature or lifestyle, stay in the Mountain West.
I didn't want to get too specific, in part not to bias responses too much, but the locations we're talking about definitely matter. We're from Salt Lake City (not Mormon), and now live outside Boulder, CO. My parents and sister's family are in suburban New Jersey outside Philadelphia. We love the cultural access in the NE, but the crowding and humidity are the big turn-offs for me. We spent 5 years in Austin so we're familiar with scorching heat/humidity and don't enjoy it. If there were any way to arrange matters such that we all lived in the West that would be ideal, but that's not a viable option.
We visit Utah more-or-less every year, and one thing that's striking is how different the assumptions wrt family size are. Our family of five is a little too big for a lot of stuff elsewhere, and especially in the DC area--they can accommodate you but you're a bit of an exception. In Utah, we're a small family.
Totally understand this point; Utah is a great place for large families. Somewhat relatedly, the area we live in now is mostly upper-middle class striver types with both parents working; in our family I'm the sole breadwinner and my wife stays home and we can definitely sense the mixture of resentment and disdain from some people here. Being in an area with a larger diversity of acceptable life choices would be refreshing
Why would you not normally consider moving there?
It's a part of the country with a different culture, climate, and geography than I'm used to. I've enjoyed my many visits there, and within two or three hours drive there is a large array of things to do and places to see, but the place we'd be moving is itself not a big draw.
I'm pretty far behind you as my wife and I just had our first child in January, so while I can't answer your question, I can say that even these first six months (and the year and a half of marriage before having a kid) have been a time of rich fullness just due to the fact that my wife's family and my parents all live close by. Our location doesn't account for much of that, as I live smack in the middle of North Dakota.
I'm sure that we would still be very much enjoying life together even if none of our family were close by, but having family around definitely adds an extra depth and richness that I feel would make a move like you're describing worth it.
I'm not even so sure about adding depth and richness, but it sure would be nice to have free babysitting.
OP's kids are a bit older, but I do regret having gone through the small kids phase with no family nearby; it takes a huge amount of pressure off when there's someone who can watch the kids sometimes and let both parents have a little break.
Thanks, yes, the babysitting (or lack thereof) in the kids' early years was incredibly draining
Very true, it's been insanely helpful.
Thanks for replying. For me this is a choice between great climate and access to great natural beauty, or closeness to family and the ability to share our life in a more casual, regular way than guesting/hosting family for a week or more in their/your house. For years the choice was obvious.
Been having a lot of fun working with ChatGPT on an alternate-history scenario where the transistor was never invented- somehow, silicon (and germanium etc.) just doesn't work as a semiconductor in this alternate timeline. It seems like humanity would have invented vacuum microelectronics instead? Maybe done more advanced work with memristors too? It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much, much smaller.
Without electronic capital markets you'd have a radically different 20th century- slower growth, more stable, less capital flowing in and out of countries. This might've slowed China's growth, specifically: no e-commerce, less investment flowing into China originally, no chance for them to hack & steal Western technology. Also a decent chance that the USSR's collapse might not have been as dramatic- they might've lost the Baltics and eastern Europe, but kept going otherwise. The US would probably be poorer without Silicon Valley, plus Wall Street would be smaller without electronic markets. Japan might really excel at the kind of precision mechanics & analog systems that dominate this world. So it'd be a more multipolar world overall.
(I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)
Sounds like a fun project. If you haven't seen it already, you may enjoy some of the summaries of pre-IC space station proposals here: https://projectrho.com/public_html/rocket/spacestations.php#atlasstation . No ICs *might* mean a much larger human presence in space. That plus an intact USSR could be very interesting.
Copying an AI summary from a query "cold field emission microelectronic vacuum tubes"
>Cold-field emission microelectronic vacuum tubes, or vacuum microelectronics, utilize the mechanism of electron emission into a vacuum from sharp, gated or ungated conductive or semiconductive structures, avoiding the need for thermionic cathodes that require heat. This technology aims to overcome the bulkiness of traditional vacuum tubes by fabricating micro-scale devices and offers potential applications in areas such as flat panel displays, high-frequency power sources, high-speed logic circuits, and sensors, especially in harsh environments where conventional electronics might fail
Admittedly these are still higher voltage and less dense devices than semiconductor FETs, but electronics would not have been limited to hot cathode bulky tubes even if silicon transistors never existed.
> (I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)
Really? This is basically the premise of the video game Fallout.
"It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller."
Don't forget that you can probably have fairly large computer memories (in the context of vacuum tubes ...) because of core memory:
https://en.wikipedia.org/wiki/Magnetic-core_memory
PDP-11s shipped with core memory and you can do QUITE A LOT with 1 MB (or less).
And you don't need transistors for hard drives, either :-)
Imagine "programs" being distributed on (error correcting encoded) microfiche.
Sounds like fun in a steam-punk way.
Also, you can easily imagine a slow internet. Think something like 1200 baud (or faster) between major centers (so very much like early Usenet). You won't spend resources on images or pretty formatting, but moving high-value *data* should work.
https://en.wikipedia.org/wiki/Computer_network
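Back-of-envelope for what 1200 baud buys you (assuming roughly 10 bits on the wire per byte once you count start/stop framing):

```python
bytes_per_sec = 1200 / 10   # ~120 B/s at 1200 baud with 8N1 framing
for name, size in [("one-page memo (4 KB)", 4_000),
                   ("dense report (100 KB)", 100_000),
                   ("full 1 MB core-memory image", 1_000_000)]:
    print(f"{name}: {size / bytes_per_sec / 60:.1f} minutes")
# ~0.6, ~14, and ~139 minutes: fine for high-value text, hopeless for images.
```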
Just imagine the electrical and cooling requirements for a GPT running on vacuum tubes :)
About the time transistors were becoming widely used, micro-vacuum tubes were also in use. I don't know what their life was, and clearly transistors were found superior, but they were competitive in some applications.
So, yes, vacuum micro-electronics would have been developed. I've got doubts that memristors would have shown up any more quickly than they did here.
It's not clear that vacuum electronics couldn't have been developed to the same degree of integration that transistors were, so I'm not sure the rest of your caveats hold up. They might. I know that vacuum electronics were more highly resistant to damage from radiation, so there might well have been a different path of development, but I see no reason to assume that personal computers, smart phones, routers, etc. wouldn't have been developed, though they might have been delayed a few years. (That we haven't developed the technology doesn't imply that it couldn't have been developed.)
Agreed! I hadn't seen your comment in time, and replied with essentially the same point.
It's also possible to miniaturize electromechanical switching to IC scale with MEMS and NEMS relays. It's a lot slower than transistors, which is why it's only used for specialty applications, but it's possible.
This is so interesting
My husband was forced untimely into a quick round of unsatisfactory car shopping after Allstate took its sweet time deciding to total his perfectly driveable old Subaru. He'd been rear-ended by someone who spoke no English, had no proof of insurance on him, and said he had insurance but didn't know the name of the company before driving away (the babies were crying; the side of a freeway is no place for a half dozen children) - and who miraculously did have it (one time the state doing its seeing-like-a-state thing was helpful).
As a result - life having other distractions, and he having little interest in modern cars - he got steered into buying his first “new” car.
That’s something that won’t ever happen again!
All those new features he didn’t want to pay for … and Subaru doesn’t need to haggle, period.
He was set to get his two requests - an ignition key and a manual, slam-shut gate - swapped in from a dealer in another city, but in the event a buyer there was simultaneously grabbing that one, so the one they brought in was sadly keyless.
We should have just returned home (an hour-plus away), but a certain amount of time had been invested, and a planned road trip was upcoming.
Question: should I get him one of those faraday cage thingies? It has been established that he won’t stuff the fob in foil every night, nor remember to disable it.
He didn’t even know about this car-stealing method, not being much online and certainly not on Nextdoor.
There is no consensus on the internet about the need for this. Possibly it's already passé, superseded by new methods of thievery.
We live in a city that had 18,000 cars stolen last year. Not generally Subarus, probably… but anyway. The car is within fifty or sixty feet of the fob, in an apartment parking lot, not within view.
Our cars, when we’ve occasionally, inadvertently left them unlocked (long habit from where we lived previously) have reliably been rifled through, though it was a wash: we had neither guns nor electronics nor drugs. Once, memorably, they stole his car manual. I recall thinking that they’d better come by around daylight savings time and change the clock for him.
A couple of strategies I use with my vulnerable Hyundai in a large city with many many many stolen cars:
1. Make the interior of your car look like a poor, lazy, low-class, possible drug addict is living in it. Leave an empty fast food cup or two in the cup holder, and some random receipts and leaves and other trash tossed around. The cabin visible through the windows should look unpleasantly chaotic.
The goal here is to make it look like it's *completely impossible* that there is *anything* in the car worth breaking a window for; you want your car to look like the person driving it couldn't possibly have any loose change or small bills in a compartment, because they would have already spent them, and no way are there any nice sunglasses or first aid kits or emergency cash or snow chains or changes of clothes or any of the kind of useful stuff I keep neatly organized in my trunk.
2. Consider installing an ignition kill switch. My car's dashboard lights will come on when I insert my key, but my engine won't turn over unless I engage the hidden button which my friend installed for me. While a smart thief could probably find said hidden button if they checked around a bit, I'm fairly confident it wouldn't occur to them to check around a bit, as a) ignition kill switches are incredibly rare and b) my car looks like it belongs to a drug ghoul who wouldn't know about ignition kill switches.
Thank you! This is not a problem for my car, which doesn’t have tinted windows and can easily be viewed from outside. I just don’t leave anything in it. It has one cool feature too. It’s so old that the interior lever to pop open the trunk doesn’t work. The trunk is truly a lockbox, which is handy if you’re hiking or floating a river or something. I take my key. People can leave their stuff in my trunk. There’s no way to get in.
In general, my practice in my last city was to leave the car unlocked rather than have the window smashed unnecessarily.
Years ago, that city had so little crime that we would say if you left your car unlocked somebody might leave you a present in it.
Now I don’t like to have the stuff in the glove box thrown about. Makes you feel kind of bad so I lock it.
We shall see what happens with my spouse’s car. It’s in its beautiful new condition. In fact, we took a road trip and after we got back, I spent half a day restoring it to new car condition.
Usually, I’d be looking for the first dent to happen with relief, but in this case I want to keep it nice; I feel like it’s not a car he’ll want to keep forever.
I’ve never heard of this sort of faraday cage thing. How many cars have been stolen from the apartment parking lot in the last few years? Does insurance cover such thefts? My guess is that a precaution against this one method of theft isn’t that likely to make a big difference, particularly since theft is not that common anyway (apart from the weird Kia/Hyundai exploit that was discovered during the pandemic), but if the faraday cage is cheap and convenient and easy to set up in the tray where you put keys and wallet when you get home anyway (or however you do it), it could still be net worth it.
It was often mentioned in my former neighborhood where a car was stolen or at least ransacked virtually every night. These were much nicer cars and trucks than we then owned. It was a little mysterious. I just couldn’t picture all these vehicles being silently hotwired, and nobody having the slightest knowledge of it inside the house. That’s when people started talking about this other business with the key. A few admitted that they had left their key in the car or rather the fob.
It was a house and I parked my car in the garage anyway but most people used their garage for storage. My husband’s old Honda was in the driveway, but was not very attractive.
I just looked on a forum for people who are crazy about the make of car we just bought and the subject really didn’t come up.
Two cars were legit stolen from our complex in the last couple years, while another one was TikTok-challenged and driven away to a shopping center nearby. I don’t know about the surrounding neighborhood because I didn’t get on Nextdoor after we moved here.
The latter Kia or Hyundai was destroyed, however, between the broken windows and the damage to the steering column. It was my neighbor’s old vehicle and he was super sad though he bought a much nicer car.
I thought they said to keep the keys *inside*, but right next to the door, so that *if* someone breaks into your house, they don’t go ransacking the house and possibly turning violent. But a lot of performatively anti-woke people happily misinterpreted that as though they were saying to leave keys *outside* by the door.
Wait, what's the car-stealing method?
Supposedly using a device to capture the signal pinging between the key fob and the vehicle. How you would start the vehicle thereafter away from the fob I don't know. Or maybe just as a means to open the vehicle and throw stuff from the glove box around.
I really thought this was a thing as it was so commonly referenced, but now I'm not sure if it was imaginary/dreamed up by people who didn't want to admit they left their fob sitting in the car.
A relay attack lets a thief extend the range of the key fob by retransmitting the signals, allowing them to start the car. It doesn't let them clone the key fob. Once started, cars will not automatically shut off when the key goes out of range. Some cars have protection against relay attacks, but I think most do not. The thief would have to get close enough to the key fob to pick up the signal, and they need the key signal in real time. They can't record the signal and replay it later.
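A toy sketch of why pure relaying defeats challenge-response (conceptual only; HMAC here is a stand-in, not any manufacturer's actual protocol): the exchange proves the fob knows a shared key, but nothing in it proves the fob is nearby:

```python
import hmac, hashlib, os

SHARED_KEY = os.urandom(16)   # provisioned into both car and fob

def fob_respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_unlock(respond) -> bool:
    challenge = os.urandom(8)  # fresh random challenge, so replays fail
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

assert car_unlock(fob_respond)   # legitimate: fob in radio range

def relay(challenge: bytes) -> bytes:
    # Stands in for attacker hardware forwarding radio traffic to the real
    # fob sitting inside the house, and forwarding its answer back to the car.
    return fob_respond(challenge)

assert car_unlock(relay)   # succeeds: crypto alone can't prove distance
```

Which is why the countermeasures tend to be physical (Faraday pouches, fobs that sleep when motionless) or timing-based (distance bounding), rather than better cryptography.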
Yes, that’s what I meant. Didn’t mean they would randomly capture signals and store them for later use.
I never had a reason to think about it before. My own car is a very basic car from 2009.
I had just absorbed by osmosis this idea about newer cars.
But upon researching it, I couldn’t find that people seem particularly worried about it after all. Or any agreement about what’s going on with the key, whether it’s really talking to the car or sitting there inert.
Not sure if the subject is just really well understood only by those who steal cars and those who know a lot about electronics.
This is a very old attack in the cryptographic literature. IIRC, it was originally called the mafia fraud attack. Though really it's kind of just an instance of a man-in-the-middle attack.
A rather stupid syllogism, but I don’t own such a box. That’s what I’m trying to learn - if it’s worth buying one. For some reason I thought this would be an easy layup for this crowd.
Something interesting I learned today*: Among professional historians, antiquarians and the like there is a widespread consensus that Jesus of Nazareth was a real, historical person. Important disclaimer: this distinguishes the historical personage from any supernatural capabilities he may or may not have had.
They cite about half a dozen non-biblical references - Tacitus, Josephus, Pliny the Younger, Suetonius, Mara Bar-Serapion, Lucian, and the Talmud. Most of these are pretty brief or oblique, but they converge on a pretty recognizable figure. The evidence that he existed is a lot stronger than the evidence that he was a mythical creation, which is why mainstream scholars of all stripes have landed there.
The other interesting thing about this: the scholarly consensus is a lot stronger than the public's perception that Jesus was a historical person, and to be sure I include myself in that number (or would have last week, at least): ~76% of Americans across all religious and political affiliations believe he existed: https://www.ipsos.com/sites/default/files/ct/news/documents/2022-03/Topline%20-%20Episcopal%20Church%20Final%202.17.22%20CLEAN.pdf (question 10)
ChatGPT summary: https://chatgpt.com/share/68877913-12f8-8011-b978-ba1c0006a45b
*Several days ago but was waiting for a new OT
> The other interesting thing about this, the scholarly consensus is a lot stronger than public perception
This seems unsurprising. If you asked scholars and Americans if Sudan existed, I am sure you would get similar answers.
Also, "there existed a historical person who resembles the person in the tale" is an excessively low bar to clear. Siddhartha Gautama likely existed. Jesus likely existed. Mohammed very likely existed. Gilgamesh likely existed. Alexander very likely existed. The Iliad could well be based on a historical conflict. Moses could well have existed.
The nicest argument I have heard for Jesus's historical existence is from the lack of denial: critics of Christianity in the early centuries AD made lots of attacks of almost every kind, but they apparently didn't claim that there never was any such person as Jesus. The point being that if there had been any doubt, they would have jumped on that attack for sure.
A year or two ago, I watched an extended interview with Richard Carrier, who's one of the highest profile people arguing against the historicity of Jesus. He's a classical historian by training and a pop historian and Atheism advocate by vocation. IIRC, his thesis is that Christianity started among ethnic Jews living in the Roman world and followed what was then a fairly common template of venerating a purely spiritual messianic figure, and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier.
Carrier made some interesting arguments about the mythological pattern which I lack the expertise to assess in detail. Where I do think he rather badly misstepped was in making a big deal out of the Gospels and Epistles being written in Greek rather than Aramaic. I don't think that needs much explaining given how few classical documents have survived to the present. Greek was a major literary language throughout the region while Aramaic was not, and Christianity caught on much, much more in Greek- and Latin-speaking areas than in Aramaic-speaking areas, so only Greek foundational texts surviving isn't particularly surprising. The wikipedia article for "ancient text corpora" cites estimates from Carsten Peust (2000) that our text corpus from prior to 300 AD is 57 million words of Greek, 10 million words of Latin, and 100,000 words of Aramaic.
Where did you get the idea that Aramaic wasn't a significant language of the region at the time? It was the lingua franca from the Levant to Persia for centuries.
The Talmud alone is in the ballpark of 2.5 million words, most of it in two dialects of Aramaic and most of the rest in Hebrew. While it was compiled later than 300 AD, it contains a body of work spanning many centuries, stretching back well into the Second Temple period.
The Mishnah, compiled centuries earlier, was primarily Hebrew but with some Aramaic.
And that wikipedia page lists 300,000 words for Hebrew - the Tanakh has over 300k words, the Torah 80k of them.
The Dead Sea Scrolls, which are only partially the Torah, contain fragments of nearly 1k manuscripts. https://www.imj.org.il/en/wings/shrine-book/dead-sea-scrolls.
All that is to say, even if we really do have fewer surviving words of Aramaic than Greek, that almost certainly has more to do with our sample than the ancient source.
I was only counting Aramaic, not Hebrew, and trying to time-box it to Classical Antiquity. That wikipedia page lists 100k words for Aramaic and 300k for Hebrew. But even if you want to count both together on the grounds that they're closely related languages, and also extend the time window to include the Talmud, that's still a lot smaller than the corpora for Latin or Greek. That said, I am prepared to be told that the wikipedia page is wrong (either misinterpreting the source or relying on a bad source) and would be grateful if you could point me at a better one.
My impression was that Aramaic was a significant spoken language in the Middle East, but much less significant as a literary and documentary language in the broader Mediterranean world than Greek or Latin. So someone writing for an audience in the Levant might well choose to write in Aramaic, but someone trying to write for an audience throughout the Roman Empire would probably do so in Greek or Latin depending on their own fluency, the type of document they were writing, and what parts of the Roman Empire they were most focused on.
I'm also pretty sure there was a much better infrastructure for copying and preserving Greek and Latin works from Classical Antiquity through the Middle Ages than there was for Aramaic or Hebrew, especially when it came to Christian religious texts. The people making and keeping copies of Greek and Latin documents in the middle ages were mostly Christian or Muslim, while the ones doing so for Hebrew and Aramaic were mostly Jewish. There were a lot more of the former than the latter, giving Greek and Latin documents a better chance at surviving in general. And Jewish scribes would be a lot less likely to be interested in preserving Christian gospels than Christian scribes would be, with Muslim scribes probably somewhere in between. Taken together, if there were Aramaic or Hebrew gospels, it isn't surprising at all that they weren't preserved to the present.
I think your last paragraph is crucial. Our modern sample tells us a lot more about the process of transmitting and recovering that history than it does about the original written corpus.
And given the scale of the numbers they cite from Hebrew and Aramaic, our estimates are always just one or two Dead Sea Scroll or Cairo Geniza type finds away from being totally obsolete.
I don't have a better source than the referenced page from wikipedia - just reasons to believe that its numbers represent a significant underestimate. And that, therefore, we shouldn't confidently draw conclusions yet from comparing the numbers.
That seems reasonable. Thank you for the notes of caution about the numbers.
> and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier
That doesn't sound like he's arguing against the historicity of Jesus at all then, if he's saying that Jesus is based on an actual historical person. That just sounds like the mainstream view all over again -- Jesus was real, some of the stories told about him are false, and we can quibble about exactly how much was real.
Carrier is loudly and explicitly claiming that there was no actual historical person who lived in Judea c. 30 AD matching the description of Jesus of Nazareth, and that pre-Pauline proto-Christians would have agreed with this, as they would have believed in a purely spiritual Christ and told allegorical stories about him set in a spiritual realm. Per Carrier, the claim that Jesus was a human who ministered in Judea was an invention of Paul and the Gospel writers, who re-wrote the existing stories *as if* Jesus were a real person who had been physically present in and around Jerusalem.
Right, I think I misunderstood the sentence I quoted, I thought he was saying that they'd merged their spiritual messiah with stories about some actual bloke.
I can see how what I wrote could be read that way, sorry.
Greek was the lingua franca at the time, and it was what educated people largely wrote in, particularly in the east. Marcus Aurelius even wrote his Meditations entirely in Greek.
In no way would the writers of the gospels write in Aramaic. John and Luke may not have even spoken it.
Exactly. If there was an Aramaic proto-gospel, it would have had to have been very early and very niche and it probably would have been oral rather than written. Anyone writing in the Eastern Mediterranean for a broader audience would have done so in Greek.
Oh, Carrier is the guy that Tim O'Neill has the beef with. Doesn't think much of Dr. Carrier's arguments 😁
I'm Irish Catholic so you know which side of the fence I'm coming down on here, but I do have to admit to a bias towards the Australian guy of Irish Catholic heritage as well! I can't say it's edifying, but it's fun:
https://historyforatheists.com/jesus-mythicism/
Here's the Carrier one (of several):
https://historyforatheists.com/2016/07/richard-carrier-is-displeased/
"It seems I’ve done something to upset Richard Carrier. Or rather, I’ve done something to get him to turn his nasal snark on me on behalf of his latest fawning minion. For those who aren’t aware of him, Richard Carrier is a New Atheist blogger who has a post-graduate degree in history from Columbia and who, once upon a time, had a decent chance at an academic career. Unfortunately he blew it by wasting his time being a dilettante who self-published New Atheist anti-Christian polemic and dabbled in fields well outside his own; which meant he never built up the kind of publishing record essential for securing a recent doctorate graduate a university job. Now that even he recognises that his academic career crashed and burned before it got off the ground, he styles himself as an “independent scholar”, probably because that sounds a lot better than “perpetually unemployed blogger”."
And then he really gets stuck in 😀
Yeah, my impression of Carrier is that he seems clever and interesting, but the actual substance of his arguments seems pretty weak, even aside from my priors about who's likely to be right when a lone "independent scholar" is arguing that the prevailing view of academic experts is trivially and obviously false on a subject within their field.
I'll check out the O'Neill article, thank you.
O'Neill is fun and I trust him because although he's an atheist himself, he gets so pissed-off by historical errors being perpetuated by online atheists and the mainstream that he goes after them.
He does have a personal grudge going with Carrier, so bear that in mind. Aron Ra is another one of the Mythicists with whom O'Neill tilts at times, but not as bitterly as with Carrier.
I was amused by the reference to Bayes' Theorem (seeing as how that's one of the foundations of Rationalism) in the mention of Carrier's book published in 2014:
"Two years ago Carrier brought out what he felt was going to be a game-changer in the fringe side-issue debate about whether a historical Jesus existed at all. His book, On the Historicity of Jesus: Why We Might Have Reason for Doubt (Sheffield-Phoenix, 2014), was the first peer-reviewed (well, kind of) monograph that argued against a historical Jesus in about a century and Carrier’s New Atheist fans expected it to have a shattering impact on the field. It didn’t. Apart from some detailed debunking of his dubious use of Bayes’ Theorem to try to assess historical claims, the book has gone unnoticed and basically sunk without trace. It has been cited by no-one and has so far attracted just one lonely academic review, which is actually a feeble puff piece by the fawning minion mentioned above. The book is a total clunker."
O'Neill's quote from Carrier proudly displayed on his website:
"“Tim O’Neill is a known liar …. an asscrank …. a hack …. a tinfoil hatter …. stupid …. a crypto-Christian, posing as an atheist …. a pseudo-atheist shill for Christian triumphalism [and] delusionally insane.” – Dr. Richard Carrier PhD, unemployed blogger"
Deep calls to deep, and so does Irish invective between the sea-divided Gael, so that's probably why I like O'Neill so much, even apart from his good faith in historical arguments.
Academics don't view denial of Jesus' existence as much of an argument. Most call it "fringe."
If you're interested in going deeper, I would recommend looking into the modern quests for the historical Jesus, which not only surfaced and studied extrabiblical sources on Jesus, but also developed methodologies for evaluating the gospels:
https://en.wikipedia.org/wiki/Quest_for_the_historical_Jesus
Academics I've read and listened to lean toward the conclusion that only two events in the gospels about Jesus' life are reliable: his baptism by John the Baptist, and his execution by the Romans. (These both rely on the criterion of embarrassment; that is, because these events undermine his followers' beliefs, their inclusion in the gospels suggests they actually occurred.) Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations.
The quests for the historical Jesus also bleed into modern understandings of how the gospels were authored, such as the dominant theory of Markan priority, and the theoretical Q document.
"Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations."
This is true, but in a discussion of a New Atheist figure it's worth adding some context. For most of these scholars, rejection of the supernatural is a premise rather than a conclusion. It's often the case that an academic will write, "Since its miracle stories are false, this document must be late," only for his reader to say, "Since this document is late, its miracle stories must be false," without realizing the circularity.
C. S. Lewis wrote on this very thing in the introduction to his book "Miracles":
"Many people think one can decide whether a miracle occurred in the past by examining the evidence ‘according to the ordinary rules of historical enquiry’. But the ordinary rules cannot be worked until we have decided whether miracles are possible, and if so, how probable they are. For if they are impossible, then no amount of historical evidence will convince us. If they are possible but immensely improbable, then only mathematically demonstrative evidence will convince us: and since history never provides that degree of evidence for any event, history can never convince us that a miracle occurred. If, on the other hand, miracles are not intrinsically improbable, then the existing evidence will be sufficient to convince us that quite a number of miracles have occurred. The result of our historical enquiries thus depends on the philosophical views which we have been holding before we even began to look at the evidence. The philosophical question must therefore come first.
"Here is an example of the sort of thing that happens if we omit the preliminary philosophical task, and rush on to the historical. In a popular commentary on the Bible you will find a discussion of the date at which the Fourth Gospel was written. The author says it must have been written after the execution of St. Peter, because, in the Fourth Gospel, Christ is represented as predicting the execution of St. Peter. ‘A book’, thinks the author, ‘cannot be written before events which it refers to’. Of course it cannot—unless real predictions ever occur. If they do, then this argument for the date is in ruins. And the author has not discussed at all whether real predictions are possible. He takes it for granted (perhaps unconsciously) that they are not. Perhaps he is right: but if he is, he has not discovered this principle by historical inquiry. He has brought his disbelief in predictions to his historical work, so to speak, ready made. Unless he had done so his historical conclusion about the date of the Fourth Gospel could not have been reached at all. His work is therefore quite useless to a person who wants to know whether predictions occur. The author gets to work only after he has already answered that question in the negative, and on grounds which he never communicates to us.""
Even if I were a theist, I would be doubtful about miracles. From what we know of the observable universe, which is vast beyond comprehension, it seems that whoever created it is really big into the laws of physics. Breaking the laws of physics to help a few people in ancient Judea seems really out of character.
And even if she did, she would have taken great care that the miracles are entirely deniable in our age. Why have Jesus walk over water and heal the sick when he could just have placed a cubic kilometer of titanium monument near Jerusalem which is inscribed with the correct faith and will heal all illnesses in any believer who touches it? For some weird reason God was fine with Jesus converting his followers through his wizard powers but really prefers later humans to find faith without the benefit of just being able to update on statistically significant miraculous results. Sounds fishy.
If a historian finds a document which uses Latin phrases which will not appear for another few centuries after the document claims to have been written, he will conclude that the document is a forgery, and not consider the possibility that it might have been written by a time traveler, even though that would explain the evidence equally well.
>Breaking the laws of physics to help a few people in ancient Judea seems really out of character.
Lewis spent a whole chapter addressing this critique in the book (Chapter 12, The Propriety of Miracles). Here are some excerpts:
"If the ultimate Fact is not an abstraction but the living God, opaque by the very fullness of His blinding actuality, then He might do things. He might work miracles. But would He? Many people of sincere piety feel that He would not. They think it unworthy of Him. It is petty and capricious tyrants who break their own laws: good and wise kings obey them. Only an incompetent workman will produce work which needs to be interfered with. And people who think in this way are not satisfied by the assurance given them in Chapter VIII that miracles do not, in fact, break the laws of Nature. That may be undeniable. But it will still be felt (and justly) that miracles interrupt the orderly march of events, the steady development of Nature according to her own inherent genius or character. That regular march seems to such critics as I have in mind more impressive than any miracle. Looking up (like Lucifer in Meredith’s sonnet) at the night sky, they feel it almost impious to suppose that God should sometimes unsay what He has once said with such magnificence. This feeling springs from deep and noble sources in the mind and must always be treated with respect. Yet it is, I believe, founded on an error
…
"A supreme workman will never break by one note or one syllable or one stroke of the brush the living and inward law of the work he is producing. But he will break without scruple any number of those superficial regularities and orthodoxies which little, unimaginative critics mistake for its laws. The extent to which one can distinguish a just ‘license’ from a mere botch or failure of unity depends on the extent to which one has grasped the real and inward significance of the work as a whole. If we had grasped as a whole the innermost spirit of that ‘work which God worketh from the beginning to the end’, and of which Nature is only a part and perhaps a small part, we should be in a position to decide whether miraculous interruptions of Nature’s history were mere improprieties unworthy of the Great Workman or expressions of the truest and deepest unity in His total work. In fact, of course, we are in no such position. The gap between God’s mind and ours must, on any view, be incalculably greater than the gap between Shakespeare’s mind and that of the most peddling critics of the old French school.
…
"How a miracle can be no inconsistency, but the highest consistency, will be clear to those who have read Miss Dorothy Sayers’ indispensable book, The Mind of the Maker. Miss Sayers’ thesis is based on the analogy between God’s relation to the world, on the one hand, and an author’s relation to his book on the other. If you are writing a story, miracles or abnormal events may be bad art, or they may not. If, for example, you are writing an ordinary realistic novel and have got your characters into a hopeless muddle, it would be quite intolerable if you suddenly cut the knot and secured a happy ending by having a fortune left to the hero from an unexpected quarter. On the other hand there is nothing against taking as your subject from the outset the adventures of a man who inherits an unexpected fortune. The unusual event is perfectly permissible if it is what you are really writing about: it is an artistic crime if you simply drag it in by the heels to get yourself out of a hole. The ghost story is a legitimate form of art; but you must not bring a ghost into an ordinary novel to get over a difficulty in the plot. Now there is no doubt that a great deal of the modern objection to miracles is based on the suspicion that they are marvels of the wrong sort; that a story of a certain kind (Nature) is arbitrarily interfered with, to get the characters out of a difficulty, by events that do not really belong to that kind of story. Some people probably think of the Resurrection as a desperate last moment expedient to save the Hero from a situation which had got out of the Author’s control.
"The reader may set his mind at rest. If I thought miracles were like that, I should not believe in them. If they have occurred, they have occurred because they are the very thing this universal story is about. They are not exceptions (however rarely they occur) not irrelevancies. They are precisely those chapters in this great story on which the plot turns. Death and Resurrection are what the story is about; and had we but eyes to see it, this has been hinted on every page, met us, in some disguise, at every turn, and even been muttered in conversations between such minor characters (if they are minor characters) as the vegetables. If you have hitherto disbelieved in miracles, it is worth pausing a moment to consider whether this is not chiefly because you thought you had discovered what the story was really about?—that atoms, and time and space and economics and politics were the main plot? And is it certain you were right? It is easy to make mistakes in such matters. A friend of mine wrote a play in which the main idea was that the hero had a pathological horror of trees and a mania for cutting them down. But naturally other things came in as well; there was some sort of love story mixed up with it. And the trees killed the man in the end. When my friend had written it, he sent it an older man to criticise. It came back with the comment, ‘Not bad. But I’d cut out those bits of padding about the trees’. To be sure, God might be expected to make a better story than my friend. But it is a very long story, with a complicated plot; and we are not, perhaps, very attentive readers."
>For some weird reason God was fine with Jesus converting his followers through his wizard powers but really prefers later humans to find faith without the benefit of just being able to update on statistically significant miraculous results. Sounds fishy.
Miracle reports are quite common, even into the modern day. I'd say about half of the Christians I've asked have told me a story of something miraculous they experienced. 27% of Americans report that they've personally experienced a miraculous healing (https://www.barna.com/research/americans-believe-supernatural-healing/). Dr. Craig Keener wrote a two-volume academic work on the subject, finding that miracle reports are historically common and still common today, with millions of people around the globe reporting that they've experienced a miracle.
I've never read Miracles, but it's no surprise that Lewis got there first and explained it better. Thanks for posting it.
Sometimes a lie reveals the truth. It’s generally accepted that Jesus wasn’t born in Bethlehem. It’s only mentioned in two gospels, and the census story of moving back to your origins isn’t Roman practice. It would be mayhem. People just didn’t travel to ancestral homelands for a census. The killing of the innocents by Herod is also undocumented.
But an invented messiah can just be born wherever you need him (and the messianic prophecy mentions Bethlehem). Clearly, though, people were aware of where Jesus actually came from, so the writers had to admit to Nazareth.
Jesus is very well attested for a person of his period. The minimum viable Jesus is that he was a popular religious leader from about the class the Bible says he's from who lived roughly where the Bible says he did. That he had a large following and was believed to have magical powers and claimed to be the son of God. That he clashed with Jewish and Roman authorities. And that he was executed but his followers continued on.
If you want to say he didn't exist you basically believe in a conspiracy theory that later Christians went back and doctored a bunch of works and made a bunch of forgeries to provide evidence that he did. A lot of anti-Christians really want to believe this and produce a lot of shoddy scholarship about it. But in all likelihood Jesus was real.
I think my previous belief was that Christianity definitely existed as a religion by the mid-1st-century, lots of people knew the Apostles, the Apostles knew Jesus, and it would require a pretty coordinated conspiracy for the Apostles to all be lying.
Does the evidence from historians prove more than that? AFAIK none of the historians claim to have interviewed Jesus personally. So do we know that the historians didn't just find some Christians, interview them about the contents of their religion, and use the same chain of reasoning as above to assume that Jesus was a real person? Should we take the historians' claims as extra evidence beyond that provided by the religion itself?
Well, it proves that non-Christians living eighty years after the purported events wrote about the life and death of Jesus without expressing skepticism, which is something.
From the way Tacitus writes in 116, it seems like the general consensus among non-Christian Romans in the early second century was that Christus was a real dude who got crucified, and that there was a bunch of weird beliefs surrounding him. This belief was probably not filtered entirely through Christians, just as our ideas about the Roswell Incident of 1947 or L. Ron Hubbard are not entirely filtered through the people who believe weird things about them.
I believe what you're saying is: a large number of Christians all simultaneously, and within their own living memory, attested that Jesus existed. This is strong evidence, because otherwise a large number of people would have to all get together, lie, and then die for that lie, which seems less likely than their being a real religious movement that had met a real person. But the historians likely did not personally meet Jesus, so they don't add additional proof.
From this point of view, the main things historians add is that it makes it even less likely to be a conspiracy. Because many of the historians are not Christians and drew from non-Christian (mostly Jewish or Roman) witnesses. We don't know who these witnesses were or if any of them directly met Jesus. But they are speaking about things going on in the right time and place to have met him and the Bible doesn't suggest Jesus isolated himself from foreigners.
So either none of them met him and it was all a conspiracy by Jesus's followers that took in a bunch of people who were highly familiar with the region. Or a number of non-Christians were in on the conspiracy.
My broader point is something like: we ought to have consistent evidentiary standards. If you want to take a maximally skeptical view then you can construct a case that, for example, Vercingetorix never existed. You can cast doubt on the existence of Julius Caesar if you stretch. If that's your general point of view then you can know very little about history. I disagree with that point of view but it's defensible. If, on the other hand, you think Vercingetorix existed or the Dazexiang uprising definitely happened but think Jesus might not have existed then I think you're likely ideologically invested in Jesus not existing.
To give an example where I don't think it's bias: most modern historians discount stories of magic powers or miracles regardless of who performed them. So the fact they discount Jesus's miracles seems consistent with that worldview rather than a double standard.
I take your point that lots of historical figures are not actually well documented. But even if there's limited evidence that Vercingetorix actually existed, there's also no real reason to suppose he didn't. Since there's so much superstition and so many false claims surrounding Jesus, my prior is that his existence is also likely mythological. I'd need stronger evidence to overcome that prior than I'd need to believe Vercingetorix existed.
Do you apply the same standard to Mohammed? The evidence for him is not much stronger. In some ways it's weaker.
(This is also an admission that you're using a higher evidentiary standard for Jesus than other historical figures.)
There’s nothing mythical about the story of the historical Jesus: it’s a guy born of a woman who preaches for a while, comes into conflict with known authorities (Jewish or Roman), and is killed. The mythical stuff can be ignored; it appears in plenty of historical records of the era: portents, gods, flaming warriors in the sky, and whatnot. And that’s just Caesar. But Caesar still existed, right?
And unless you think St Paul didn’t exist (a hard ask, since he wrote letters and is written about in the Acts of the Apostles, which is about as much historical record as there can be), he is the likely and only candidate to have invented Jesus. Paul probably did popularise Christianity, but that isn’t the same as inventing Jesus.
Do you think he would get away with making up Jesus and also saying he persecuted his followers who couldn’t have existed?
Besides that, Acts has him in contact with the apostles who did meet Jesus, often in conflict with them.
Is Paul invented? Given that Tacitus puts Nero persecuting Christians in Rome by the mid-60s AD, whoever made all this up had to invent Jesus and Paul, write the four gospels and Acts, and forge all of Paul’s letters long before that. Seems a bit fantastical; it takes a lot of faith and a lack of reason to not believe in the historical Jesus.
But you don't need to invent him. Erusian's minimal story is something I expect to have happened multiple times in Roman Judea (depending on how large a following we're talking about). Mythological additions to religious founders are normal. I've been an atheist basically since I had a real opinion on the topic, and I was genuinely surprised to learn that people argue this.
Yes. If you're an atheist it's entirely sufficient to say, "Jesus was a real person. However, he was not supernatural." Instead for some reason they want to assert Jesus was entirely mythological.
Someone further down made comments that reminded me that some figures from history were later believed to have been adaptations or syncretisms of earlier figures. So that's another possibility: Jesus was fictional, but melded from earlier people. I don't think this would adequately explain Tacitus' account, for example, but it could explain multiple people being "in on" the fabrication.
(Meanwhile, maybe some people aren't invested in Jesus' not existing, but rather invested in someone existing with a name as cool as "Vercingetorix". So the real solution should have been to introduce Jesus as, uh, "Yesutapadancia".)
Jesus is a bit similar to Ragnar Lodbrok in that he is attested but a lot of the records come shortly after his death. And there's a whole bunch of extremely historical people who the history books say were reacting to him and his death which are really hard to explain if he didn't exist or was a myth.
The people who think Ragnar was entirely fictional have to explain the extremely well attested historical invasions by his historically well attested sons who said they were avenging his death and who set up kingdoms and ethnicities which echo down to today. Likewise with Jesus, his disciples, and Christianity.
But there's just enough of a gap to say that maybe he didn't exist if you really, really want to. And there's a lot of space to say some of the stories were less than reliable and some of them might be borrowed from other people. Then again, that's true of most historical biographies.
We should take the historians' claims as evidence that the people whose job it is to professionally try to figure out what happened in the past all tend to agree that Jesus was real. And they're not just looking at the Bible when they do that!
Sources that indicate Jesus existed include the scriptures (the letters and gospels of the New Testament), but also many of the apocryphal writings (which all agree that Jesus existed, even if they go on to make wildly different claims about him), the lack of any contemporary non-Christian sources that deny the existence of Jesus, and the corroboration of many other historical facts in scripture about the whole Jesus story (like archeological findings corroborating that Pontius Pilate existed, or that Nazareth existed, etc.).
You also have Josephus writing about Jesus in 94 AD, Tacitus writing about him in 115 (and confirming that he was the founder of a religious sect who was executed under Pontius Pilate), and a letter from a Stoic named Mara bar Serapion to his son, circa 73 AD, where he references the unjust execution of the "wise king" of the Jews.
Also, looking at scripture itself there are all kinds of historical analysis you can apply to it to try to figure out how old it is, and whether the people who wrote it were actually familiar with the places they were writing about. For example, they recently did a statistical analysis of name frequency in the Gospels and the book of Acts, and found that it matches name frequencies found in Josephus's contemporary histories of the region, and that later apocryphal gospels have name frequencies in them that don't match, which makes it more likely that the Gospels were written close to the time period they are writing about (https://brill.com/view/journals/jshj/22/2/article-p184_005.xml). Neat stuff like that.
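To make the method concrete, here's a minimal sketch of the kind of comparison such a study might run; the names and counts below are made up for illustration and are not the Brill paper's actual data or methodology. The idea is just: tally name frequencies in each corpus, normalize, and measure how far apart the distributions are.

```python
from collections import Counter

def name_distribution(names):
    """Turn a list of name occurrences into a relative-frequency distribution."""
    counts = Counter(names)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

def total_variation_distance(p, q):
    """Total variation distance between two distributions: 0 = identical, 1 = disjoint."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(name, 0.0) - q.get(name, 0.0)) for name in support)

# Hypothetical counts, purely for illustration -- not the study's data.
gospels_acts = ["Simon"] * 8 + ["Joseph"] * 6 + ["Judas"] * 5 + ["Mary"] * 6
josephus = ["Simon"] * 9 + ["Joseph"] * 7 + ["Judas"] * 4 + ["Mary"] * 5
late_apocryphal = ["Philogenes"] * 5 + ["Sophia"] * 6 + ["Zoe"] * 4 + ["Simon"] * 1

p = name_distribution(gospels_acts)
q = name_distribution(josephus)
r = name_distribution(late_apocryphal)

print(total_variation_distance(p, q))  # small: first-century distributions match
print(total_variation_distance(p, r))  # large: later corpus diverges
```

A low distance to a securely dated first-century corpus is weak, probabilistic evidence that the authors were drawing on genuine period knowledge; the actual study uses more careful statistics, but this is the shape of the argument.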
One major source, much disputed, is the Testimonium Flavianum, the part of Josephus' writings that mentions Jesus. Josephus was a real person who is well-attested, so if he's writing "there was this guy" it's important evidence, especially as he ties it to "James, the brother of Jesus", who was leader of the church in Jerusalem, and mentions historic figures like the high priests of the time.
How much is real, how much has been interpolated over later centuries by Christian scribes, is where the arguing goes on - some say it's nearly all original, others (e.g. the Mythicists) say it's wholesale invention.
https://en.wikipedia.org/wiki/Josephus_on_Jesus#The_Testimonium_Flavianum
Tim O'Neill has an interview with a historian who recently published a book arguing for the authenticity of the passage:
https://www.youtube.com/watch?v=9L2bE1-pyiU
"My guest today is Dr Thomas C. Schmidt of Fairfield University. Tom has just published an interesting new book through Oxford University Press: Josephus and Jesus – New Evidence for the One Called Christ. In it he makes a detailed case for the authenticity of the Testimonium Flavianum; the much disputed passage about Jesus in Book 18 of Flavius Josephus’ Antiquities of the Jews. Not only does he argue that Josephus wrote about Jesus as this point in his book, but he also argues that the passage we have is substantially what Josephus wrote. This is a distinctive position among scholars, who usually argue that it has at least be significantly changed and added to, with a minority arguing for it being a wholesale interpolation. So I hope you enjoy my conversation with Tom Schmidt about his provocative new book."
I've not watched that video, but in this one Tom Schmidt goes into Josephus' life and connections with Jewish and Roman elites:
https://www.youtube.com/watch?v=8jpEleZV1Pw
The most surprising thing (for me) was to learn about Josephus' rather energetic life, and that Josephus knew people who were one or two degrees of separation from Jesus. It puts a new shine on the questions of the Testimonium's accuracy.
I mean, when the Mythicists claim Jesus never lived, are they also saying that his brother James (mentioned by Josephus and several other documents) was also a fabrication? Mary, Joseph, and Magdalene, all wholly fictional characters? Where does the myth-making and conspiracy start and end?
Why would Romans and Jews of the era readily agree to Jesus' existence? There were a number of mystery cults and sects, Jewish and otherwise, around the eastern Mediterranean at the time. Why go out of their way to claim the person at the center of this particular one existed if he didn't?
This isn't merely a single ethnic clan (the early Christians) circling around a myth. This is documentation from two groups who have no interest in spreading Christianity, a long history of bloodshed between each other, and one of those groups later persecuting Christians.
I think you're well overstating the minimum. Yeah, there was someone with that name around. There aren't any records of the trial though. (There's an explanation for the lack, but the records are still missing.) And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries", though we don't know what the original records said, or even if they existed. Sometimes we have good evidence of their doctoring the records. Often enough to cast suspicion on many where we don't have evidence. Many were clearly written well after the date at which they were ostensibly written.
If you wanted to claim that he was a popular religious-political leader, I'd have no argument. There's a very strong probability that he was, even though most of the evidence has been destroyed. (Some of it explicitly by Roman Christians wiping out the Nazarenes.)
There’s a lot of hand waving there but no specifics. The only possible case where Christians modified anything is parts of Josephus. That’s it.
Yeah, the "hand waving" a valid criticism. It's been decades since I took the arguments seriously, and I don't really remember the details. But when you say " The only possible case ", I'm not encouraged to try to improve my argument. Your mind is already made up.
Would you be encouraged to try to improve your argument for the sake of an interested third party? In a public comment section like this you're never solely writing for the person you responded to, and I for one would indeed be quite intrigued to hear more specifics about your case, as I don't have any particularly strong opinions on the subject already.
The problem is that, IIRC, the evidence is ambiguous enough to allow nearly any conclusion that one desires. If one is biased in one direction or another, the evidence supports that direction, though few think that it's strong enough to constitute proof. And the name is not the thing. (There were lots of people named "Jesus", i.e. "Yeshua" in Israel.)
It's been decades since I took the argument seriously enough to look into it. I decided that there wasn't sufficient evidence to conclude that Jesus was any particular real person, but that he was more probably an amalgamation of several religio-political agitators. It would take a long time to reconstitute my reasoning in detail, and it was never strong enough to convince someone who held strong opposing beliefs. IIRC, I did decide that the main character in the composite figure was born around 33 BC, but I don't remember why.
Additionally I've been part of small groups, and noticed the way they alter their oral history to add things that weren't there, and to remove things that were embarrassing. Sometimes the changes happen within a period of weeks, as enemies become allies. Admittedly the evidence I have for the alteration of written history is certainly later, but one might not only consider the "pieces of the true cross", but also the actions of the Council of Nicaea. And things like Epistles to the Corinthians are basically the same as propaganda, and should be considered equally reliable. Being old does not make something more trustworthy.
> There aren't any records of the trial though.
There are records that say he was executed by local authorities. The specific Biblical details are less well attested.
> And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries"
Every time I've pushed on these claims it comes down to the equivalent of not being able to prove a negative. It's clearly there in the versions we have, and they make some vague gestures about word choices to show it was inserted later. I'm not aware of a single smoking gun where someone admitted they doctored a record from the time.
I am especially suspicious of this because it's clear a lot of people WANT to believe they are later insertions for basically ideological reasons. But if you have an example that is either a smoking gun, like the evidence we have about the Austrian archduchy title or better, then I'd love to see it.
> There are records that say he was executed by local authorities
Isn't Josephus the first one to mention this? I don't think the Romans themselves left surviving records of an execution they would not have regarded as especially significant at the time.