Just read the post back on SSC about vortioxetine, can confirm, causes nausea, costs an arm and a leg, and does seem to very mildly improve cognition beyond the "I'm not depressed and having panic attacks" level. I'm also taking it in conjunction with bupropion, which given the synergistic effects there means I'm probably taking effectively 1.5x the maximum recommended dose, but I'm large and it's keeping me employed, so that's a plus. I did try dropping the dose of vortioxetine to compensate for that but it didn't go well. I'm curious about this new bupropion + dextromethorphan protocol, because that would be *vastly* less expensive.
B) Card Game: Predictably Irrational - Feel free to bring your favorite games or distractions.
C) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.
D) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.
E) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.
TL;DR: Scott, what do you think about the "UFO hearings" and the value of investigating the reports they're based on?
Haven't been a consistent reader/fan for long; it's more been a slow burn since college friends started recommending the (original) blog and pieces from it to me several years ago, so I'm not sure if this is the right place to ask for a Scott piece, but here goes (I am addressing Scott himself, but others are free to comment as well):
There's a whole deal right now about Congress getting the Pentagon to investigate "unidentified aerial phenomena" or "UAPs" (aka UFOs, at least nominally in the original "unidentified" sense rather than the euphemistic "alien spacecraft" sense). I read a few articles* about it from fairly credible** mainstream sources that made me take it more seriously than I generally would. The short version seems to be that there do seem to be real objects that occasionally pose threats to fighter pilots who have come near to colliding with them, and that move in strange, unpredictable ways, which deserve investigation at least for the purposes of a) ensuring the safety of said pilots, and b) addressing national security in the case that these are, say, highly advanced aircraft using secret technology employed by forces (like rival nations) opposed to US interests. Even setting aside the (to my mind) much more dubious hypotheses of extraterrestrial origin, and whether or not one feels that US interests are worth defending, these seem like legitimate grounds for serious investigation.
Given your past-stated interest in taking "conspiracy theorists" seriously and debating in a level-headed, rationalist way with them rather than dismissing them out-of-hand, this seems like a topic you might consider investigating or commenting on, even if you personally *do* believe that even the reports that make no claims of extraterrestrial origin are essentially bullsh*t (people making things up or mistaking mundane objects like balloons for some mysterious advanced enemy aircraft).
So basically: I haven't investigated this much further than reading these few articles and watching the referenced videos, and I'm interested in seeing you do a deep-dive post on this to see what you find/think of it, or at least hearing you reply in a comment what your opinions on the matter are.
1. Never ever use the term "UFO". First, because too many people will assume you mean "alien spaceship". Second, because even if you are correctly understood, you are presuming that the thing is a material object when all you have is an image or a perception, and that will prejudice the analysis.
2. Someone should quietly investigate the UAPs for which sufficient data exists.
3. Everyone should ignore the people excitedly hyping UFO investigations; we don't have the results to justify that yet.
Why are people worried about running out of training data for LLMs? Isn't a huge amount of new training data being dumped onto the internet each day by humans posting things on the internet?
So, I'm definitely open to correction here, but as I understand it, it's a combination of two things:
First, marginal improvements to LLMs typically require an order of magnitude more data. For example, if you have a model that's 94% accurate using 10 million rows of training data, going to 100 million rows might only get you to 95% accuracy. So when people talk about running out of data, they're not looking at doubling the training data, they're looking to 10x or 100x it.
Second...we're generating data, a lot of data, but not that much compared to all the data that already exists. For example, Twitter produces a ton of text data, but it's also been around for what, 15 years now. It can probably add more users, but everyone who could be online is pretty much online by now. So Twitter, almost by definition, can't grow the total amount of tweets ever made by 20%-30% or so a year. More realistically, we'd be lucky to increase the total amount of text data ever created in human history by 2-3% a year, and that's not compounding growth; that's a fixed amount relative to current production.
So imagine that x is all the text data ever created in human history, which is basically what ChatGPT was trained on. In order to improve ChatGPT with more training data, we need to get to 10x, or ten times all the text data that currently exists. If we're essentially at maximum text generation now, meaning we generate 3% of all text data ever for every single year we exist, then it takes us 9/0.03 or 300 years to get to 10x the current amount of text data. Or, to rephrase, if we, as a species, have written 100 gajillion words and write 3 gajillion words every year, it will take us 300 years to have written 1000 gajillion words.
So, broadly, people see where lots of text is being generated, but they don't see where the next 10x or 100x of every word ever written will come from.
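For concreteness, here's a minimal Python sketch of that back-of-the-envelope math (the 3% yearly figure and the 10x/100x targets are just the illustrative numbers from above, not real estimates of the size of the text corpus):

    # Back-of-the-envelope: years until the total text corpus reaches a
    # target multiple of its current size, if we add a fixed fraction of
    # today's corpus each year (i.e. roughly constant yearly production).
    def years_to_reach(target_multiple, yearly_fraction_of_corpus):
        extra_needed = target_multiple - 1   # e.g. 10x total means 9x more text
        return extra_needed / yearly_fraction_of_corpus

    print(years_to_reach(10, 0.03))    # 300.0 years to reach 10x at 3% per year
    print(years_to_reach(100, 0.03))   # ~3300 years to reach 100x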
> Isn't a huge amount of new training data being dumped onto the internet each day by humans posting things on the internet?
Yes, today. Tomorrow, it will be a huge amount of data mostly generated by LLMs.
Imagine future SEO and made-for-AdSense websites. The easiest way to set one up will be to choose the keywords you want to focus on, and let the AI generate a website with thousands of generated articles containing those keywords, hyperlinking each other. Then share a few of those articles on social networks, to generate incoming links. This is what most of the internet may look like, soon.
If you keep training LLMs on data that was mostly generated by LLMs, then instead of learning human speech and thought better, you get reinforcement of whatever quirks the LLMs have already randomly acquired. For example, if one AI invents a new word as the result of a bug and uses it in 0.1% of generated web pages, the next generation of AIs will learn it as a valid word.
At some point in the future, I'd like to see a derivation of the Student-t distribution from the Gaussian distribution. My intuition tells me that when a statistician tries to infer a population distribution from a sample, he or she considers every single possible population distribution, and weights each population distribution by the probability that the sample could have come from that population.
I like to imagine that when polling companies calculate error margins, they imagine an infinite collection of parallel universes with different populations, each of which gave a sample of poll respondents with the same answers to their poll, and weigh each parallel universe accordingly.
If the math bears that out, that would be an excellent way to visually explain to laymen why a sample of 1000 people can accurately portray the opinions of millions of people.
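In the meantime, here's a small Python/SciPy sketch (with arbitrary made-up parameters) of the fact that derivation would formalize: if you repeatedly draw small samples from a Gaussian and standardize the sample mean using the *sample* standard deviation, the resulting statistic follows a Student-t distribution with n - 1 degrees of freedom, which is roughly the "weigh every candidate population by how well it explains the sample" picture with the unknown variance integrated out:

    # Numerical check: t-statistics of small Gaussian samples follow a
    # Student-t distribution with n - 1 degrees of freedom.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, mu, sigma, trials = 5, 0.0, 2.0, 100_000   # arbitrary illustrative values

    samples = rng.normal(mu, sigma, size=(trials, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)               # sample standard deviation
    t_stats = (xbar - mu) / (s / np.sqrt(n))      # the unknown sigma drops out

    # Kolmogorov-Smirnov comparison against the theoretical t distribution.
    ks = stats.kstest(t_stats, stats.t(df=n - 1).cdf)
    print(ks.statistic)   # should be tiny (a few thousandths): a close match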
Honestly, seeing so many of these pieces in these journals, alongside others about how everyone secretly wants to come back to the office, how there's more productivity there, and how the commercial real estate apocalypse is upon us, makes me think it's just a hit piece.
I dunno about "secretly" but I've been quite vocal about my desire to work in an office instead of at home. I do a much, much, much better job of separating "life" and "work" when I am not 30 feet from "work" 24 hours a day. But this is a "me" thing, I have no desire to impose offices on anyone else. ;)
Looks like it's an Archive copy of a paywalled Wall Street Journal article, about people sending remote jobs to India in response to their current US employees wanting to move to different states.
Do Historians have a record of being above average predictors? Has this ever been studied?
If the answer is 'no', what is the value of historical scholarship beyond intellectual curiosity? Fair enough if you say that is in fact History's only value, but Historians carry on about how important the teaching of History is for society and politics etc. But if even being an expert in history does not give you better insight than 'the market' in predicting the future, then how could it possibly be of value? If you can't use history to make better decisions (which requires anticipating future changes in society and world events), then how could it have any instrumental value?
If you say the value is something that can't be neatly quantified like this, then that at best makes claims of History's value unfalsifiable.
History provides context, which is valuable in and of itself. Even if it doesn’t help predict specific outcomes (it may or may not; I’m making a different point here), it does illustrate the range of potential outcomes. When you look at events like the invasion of Ukraine, when they first happened, lots of people were in denial that such events were even possible, despite the fact that such events are, in the long run, utterly commonplace.
To beat a dead horse and reiterate in slightly different form the point made by VT and WoolyAI, historians can have value without being good at making predictions.
Historians investigate data (primary and secondary sources, archeological info, etc.) and try to piece together factual accounts of what actually occurred in the past, including by correcting mistakes in our understanding of the past which may be caused by chance/decay (we forgot about something and/or stories become mythologized with accrued false information over time) or purposeful deception ("history is written by the winners" can often mean that the only accounts of an event are ones that have been purposefully altered to suit the interests of those who won in a given power struggle, until more info that was hidden or can be inferred from archeology or some other source is dug up and analyzed by historians).
Basically, historians collect good data about what happened. Analyzing that data in order to predict future events is not really their job.
(Although they may often dabble in it personally or professionally, and may often be biased by their beliefs about what will/should happen in the future.)
To make an analogy to machine learning/AI, a subject popular on this blog, the historians generate the data that goes into the model, they aren't the model that gets trained on the data.
Even for the apparently simple question of explaining the observed differences in economic prosperity between countries there are several competing explanations (geography, see e.g. Jared Diamond’s books; institutions, Daron Acemoglu; culture, as in Max Weber’s point of view, and others). I would go out on a limb and say that none of these theories are predictive. Others may be. Yet all of these theories build on historical knowledge at various levels and timescales. Historical data is like bricks: you need to arrange them properly to build a house. But the fact that no one managed to do that yet does not negate the value of bricks.
Forgive me if I do not know the official terminology for what I am about to say, but I think you are substituting a strong claim (knowledge of history is important to understanding current society) for a weak claim (historians have a better understanding of current society), under the assumption that the latter naturally flows from the former. I do not believe this to be the case, and I think you are making the mistake of taking our current level of civilization for granted.
Specifically, I believe that knowledge of history has diminishing returns in applicability to society, politics, etc. That is, a historian would not necessarily make better predictions than someone with a layman's knowledge of history, or slightly more, and potentially high levels of other types of knowledge which the historian may or may not possess, let alone a crowd. However, I would certainly expect someone with, say, a high school or college-level knowledge of history to outpredict someone who grew up on a "desert island" or similar situation and has no knowledge of the past beyond their own interactions with others and experience of their community.
This naturally leads to the question of why academic history is important, and, like many other academic fields, I imagine most published work does not affect the world in a major way. However, academic history does lend itself to, for example, the finding of errors in previous accounts, which may influence our perception of the world. For example, it is a common misconception, whether or not those who seem to believe this fully acknowledge it, that native North Americans were living peacefully until Europeans came along. This conception lends itself to the idea of different racial characters, and particularly to the idea that whites/Europeans are more warlike and/or morally inferior to native North Americans. However, in reality, native North Americans, being human, did all the classic human things of waging brutal war, forming states and using them to oppress people, raiding each others' villages, committing acts of unspeakable cruelty, so on and so forth. This knowledge lends itself to the idea that the myth of European racial character is just that, native North Americans are not fundamentally different or superior beings, and at the end of the day, there are always people who seek power and use what means they can to enforce it.
Could you explain why you are posting this link? I'm aware of the fallacy, but don't see why I should be accused of engaging in it- particularly given I specifically acknowledged that the academic field of history does not seem to me to consistently be on topics highly relevant to the modern day. I do not think this is necessarily a problem, per the intellectual curiosity argument, but certainly the idea of learning from history in a sense that allows for prediction is not the driving focus of all or probably most historical scholarship. OP was talking about the value of history as a whole, and suggested the predictive powers of academic historians specifically as a metric by which to gauge this value. I stated why I feel this is an incorrect framework.
Thank you. I suppose it was, but I wanted to be as specific and objective as possible- invoking motte and bailey suggests bad faith, which I suspected but could not confirm, and I don't know if it's quite what I was looking for given OP wasn't exactly retreating. It seemed more like an isolated demand for rigor (as I imagine OP would not declare fields they were more favorably disposed to useless in applications to society/politics/etc unless their experts were also expert predictors, though this was somewhat tailored to the specific statements made about the importance of history), but I can't confirm that with only the one comment. I can confirm, at least to my satisfaction, that the measure suggested was neither useful nor fair.
The value of historians is not in being good predictors themselves but in building good predictors, among other things, at least in the classical education sense. The teacher is not judged by their own competence but by the achievement of their students.
So, to piggy back off a recent Kulak Revolt post (1), consider a professor at Harvard in 1910 assembling and compiling a list of the greatest speeches in history. There's nothing in that job description that would make you, ya know, good at making speeches. That job requires a lot of digging through archives, reading and judging contemporary reactions to those speeches, comparing those effects across time, etc. At the same time, if you're an aspiring young politician, that's probably one of the most valuable books you could read.
Now that leads to the question of whether the book is making the Harvard student good, or whether the Harvard student is good because he's the kind of guy who can get into Harvard...and whether Harvard is actually providing a classical liberal education anymore. But whether the historian is an effective predictor isn't...really what we typically look to historians for.
Which pretty conclusively shows that liberal talking points about flaws in the American education system are false, and that overall American underperformance relative to the best countries in the world is entirely a product of non-Asian minorities dragging the scores down.
And since there's no country in which unselected black or hispanic etc. populations score as well as western whites, there's absolutely no reason to assume that this underperformance has anything to do with the American educational system or that these people would do any better in the "superior" educational systems of Japan or South Korea or anywhere else (and besides, substantial intellectual gaps exist before education even starts anyway).
This is unlikely to convince anyone that the US has a good education system unless they already believe in a very strong form of HBD, because you're taking out a disproportionate amount of the lowest-income members of society by limiting yourself to American whites. If you did the same thing (purely from an income or wealth adjusting perspective) to the Japanese population, I'm sure their PISA scores would go up too.
That’s not necessarily true. You could explain racial achievement gaps in US schools with structural racism if you wanted to, or with cultural factors. It’s still not a fair basis of comparison because Japan doesn’t have a sizable minority population to be structurally racist against in the first place, or who suffer from the same cultural factors--at least, not sizable enough to affect these statistics.
You mean, basic, BASIC, mainstream science showing that intelligence is heritable?
This idea that mainstream, replicated to death intelligence research is some radical, heterodox ideology is a complete lie.
> If you did the same thing (purely from an income or wealth adjusting perspective) to the Japanese population, I'm sure their PISA scores would go up too.
But white americans are a distinct population, with rich and poor and middle class people. It's not an artificially constrained population, it's all white people warts and all. And there's more income/wealth diversity amongst American whites than Japanese people! Meaning if you're going to cling to low income, you cannot possibly explain the white american/japan PISA gap.
And black americans do much, much worse than countless lower income populations around the world. Meaning it's trivially true that income does not explain black underperformance.
And like I already said, intellectual differences between blacks and whites mostly exist already before schooling even begins!
You’re trying to compare two schooling systems. Hypothetically, say the difference in race outcomes for schooling is purely socialized/downstream of income or something like that. In that case, you removed a bunch of people tending toward the bottom of income (non-whites), and PISA scores improved, rising above Japan’s. That’s entirely consistent with a pure income story.
Now, if you believe testing outcome drivers are just race + school system, then it would be a massive result, but I don’t think most people believe that. If you could show that, after controlling for income/poverty, the white American Pisa scores remained unchanged, that would make the result much more interesting. Otherwise you’re letting the American system just throw out its bottom tier schools and students.
I mean you have to believe "race itself is the driver of differing school/iq outcomes, to the exclusion of other factors like income (which are just downstream of race)." If not, it doesn't make sense to compare adjusted US numbers to unadjusted Japan etc numbers.
There's no evidence that income causes schooling outcome differences, and countless poorer populations around the world do much better on PISA than american blacks.
Don't tell me that you really believe that poor kids are served just as well by the American education system as rich kids?
Now, as to around the world, it's easy to forget that different countries and different education systems really are hugely different. In many cases, you're comparing incomparable things. I come from a communist country where everyone except a tiny minority was poor, and all schools (except a few specialized ones and schools for kids with severe cognitive issues) had the same math textbooks that the military insisted on in order to get enough engineers to keep the weapons programs going. So everyone got, for example, geometry with proofs - and you had to successfully pass everything to graduate from one year to the next. Your median kid would thus be really poor but not too shabbily math-educated, whatever his IQ, and your median math teacher really understood what they were teaching. Now, you'd be comparing this situation to the US situation, where every school picks its own textbooks - and the poor kids tend to be the ones stuck with the garbage textbooks and crappy teachers, because their parents can neither move them to another school nor push back.
It seems to me that you're comparing apples to oranges and drawing conclusions from this not very informative comparison.
Firstly, there aren't racial disparities in the US education system. There are racial disparities in academic ability.
Secondly, there are huge racial academic disparities in ALL countries around the world. The US is just made to look bad because we have far more low IQ minorities than any other developed country.
Black people do bad in school in every country on earth (where there's a meaningfully large, non-selected population). There's precisely ZERO evidence that this has anything to do with education quality in the US, and precisely ZERO evidence that they would do better in Japan or South Korea or Finland or anywhere else on earth.
If Finland or Japan had 13% of their population black with the same intelligence distribution as American blacks, there's no reason whatsoever to think that this wouldn't similarly drag down their scores the same way it does in the US.
Nobody anywhere in the world has developed an education system that can take low IQ black students and raise them up to the same academic ability as whites or northeast Asians, so the fact that the US happens to have vastly more black people than Japan or South Korea or Finland says nothing about US educational quality.
"Nobody anywhere in the world has developed an education system that can take low IQ black students and raise them up the same academic ability of whites or north east asians". You mean, someone has developed an education system that can take low IQ non-black students and raise them up to the same outcomes as average IQ students, but black students are somehow not susceptible to this? Just checking.
>Sub-Saharan Africans are more genetically diverse than the entire rest of the world combined, therefore if academic achievement were closely tied to genetic racial traits, you'd expect them to have far more variation in results than any other groups.
There's no reason total genetic variation should necessarily result in greater intellectual diversity. You need diversity in the genes that actually influence intelligence, and you cannot infer this diversity from total genetic diversity.
>If we instead observe that they do consistently worse than other groups, then that rules out a genetic explanation.
Absolutely false. If no African populations were subject to selection pressures for higher intelligence, then these genes will not predominate in any African population.
Just think about it: All sub-Saharan African populations have darker skin than all indigenous European populations. By your logic, the "greater genetic diversity" of Africans means this cannot be caused by genetics! By your logic, "greater genetic diversity" predicts greater diversity in skin color than Europeans (or the rest of the world), but this is trivially false. Total genetic diversity does not predict phenotypic diversity.
ALSO, the within-group heritability of intelligence for whites is almost identical to that of blacks. This is the opposite of what we would predict based on your logic.
>Unless of course you are lying again and black people do not in fact systematically do much worse than white people academically in every country:
Sorry for posting a video, but the issue of UK schooling performance cannot be explained in a few sentences.
Though I will say, this contradicts the unending claims of the UK being a "racist" and "white supremacist" country by leftists.
>(Also, calling the African-American population "non-selected" is farcical)
Says who? Do you think dumber Africans were enslaved by other, smarter Africans? What if it was just the Africans better at fighting and/or more numerous who enslaved other Africans?
Also, black americans average 25% white ancestry, and because mixed-race individuals on average have an IQ somewhere close to the average of their parents, this means that black american intellectual ability has been greatly boosted compared to the original population.
> There's no reason total genetic variation should necessarily result in greater intellectual diversity. You need diversity in the genes that actually influence intelligence, and you cannot infer this diversity from total genetic diversity.
More variability in total genetic diversity entails more variability in the genes that influence intelligence too, all else being equal. You have to demonstrate that we have some reason not to expect all else to be equal, and until you do, this does seem to undermine your argument.
> Just think about it: All sub-saharan African populations have darker skin than all indigenous european populations. By your logic, the "greater genetic diversity" of Africans means this cannot be caused by genetics!
But we have a legitimate physical justification to explain this difference, per the above, namely the protective effects of melanin against UV damage. No such argument exists for intelligence from what I've seen.
What a useless comment! You made factual claims, he responded with factual rebuttals, and instead of simply acknowledging that you were wrong and moving on as a better-informed person, explaining why you think your point is actually valid, or just not responding, you resort to vague (and, as far as I can tell, not even true) ad hominems about how someone goes to great lengths to believe things, and how offended they have supposedly become.
On the subject of bans: Do you ever plan to make a list of warned and banned users, with an explanation of what they were warned/banned for? SSC had one and I found it helpful, both for getting a sense of what wasn't acceptable in the forums and for seeing how responsive you were to people's complaints.
I’d have to guess it isn’t up to date, if it has six bans from 2 consecutive days in 2021 and no other bans. (Unless you want to say that it is an up-to-date list, which accurately shows every ban that Scott could be bothered to report).
The WaPo's media critic is out today with an interesting take on the Dominion Voting Systems-Fox News defamation trial, which, apparently, is finally underway.
The judge has already ruled, and Fox News does not contest, that Fox News hosts made many on-air false statements about Dominion in the weeks after the November 2020 election. The question that will go to the jury is just whether Fox News did so with "actual malice", i.e. the legal standard established by the SCOTUS in its "New York Times v. Sullivan" ruling in the 1960s. That requires proving that the media outlet's key staff knew they were lying about Dominion on the air and consciously chose to do so anyway. Hundreds of pages of emails and texts already admitted into evidence do appear to provide that proof regarding both Fox executives and on-air hosts (and that's just the stuff that has been made public so far).
The "Sullivan" standard has lately come under increasing pressure from conservative jurists and politicians, including at least two current SCOTUS justices, who are on record saying that it should be scrapped. Defamation is defamation, is the argument, and making it harder for public figures or companies to get relief for being defamed amounts to giving the media a free pass.
Conservatives of course have in mind news outlets like the NYT and CNN and others who they believe regularly take advantage of "Sullivan" to defame people on the other side of the culture war, starting with but not limited to Donald Trump. Meanwhile liberals and progressives are openly cheering Dominion in this lawsuit against Fox News, which of course has been their bogeyman for a quarter-century now.
If Dominion wins and eviscerates Fox News (they're seeking billions in damages) then presumably liberals will continue to say that the "Sullivan" standard works, while conservatives will view this lawsuit's outcome as simply more evidence of a double standard in the courts. So no particular change in the politics of U.S. defamation law.
If on the other hand Fox News successfully uses "Sullivan" to save itself from what, based on the evidence made public so far, seems to have been pretty blatant knowing defamation...would liberals then still support "Sullivan"? Would conservatives then still want to get rid of it?
Would those attitudes withstand the inevitable public release of the trial evidence that hasn't yet leaked out?
_That_ outcome -- Dominion having been pretty clearly defamed but having no recourse due to the "Sullivan" standard -- could make things newly interesting with regard to the politics of U.S. defamation law.
If Dominion loses this case because the jury doesn't find “actual malice,” I would interpret that as meaning that either (1) the evidence is less clear than I think it is based on the summary judgement findings, or (2) the justice system failed in this case. The first wouldn't be a reason to question Sullivan. The second would be a reason to question Sullivan only to the extent that it was evidence that the Sullivan standard was unworkable in practice. A single case would be enough to raise questions about that, but it would take multiple cases to persuade me that Sullivan was in fact unworkable in practice.
Talk of billions of dollars in damages is pure speculation at this point. Dominion did ask for $1.6 billion in their complaint. All that Dominion sought in its motion for summary judgement was a determination that it had been libeled per se. To support this motion, Dominion argued that Fox had made false statements with actual malice, so we know what Dominion's evidence for that is. They did not argue that they had suffered any particular amount of damages, so we won't know what their case for damages is until they present it to the jury.
Ah I forgot that this is actually just the first of two lawsuits. The one from Smartmatic, making the same arguments as Dominion, is supposed to go to trial later this year. Dominion's lawsuit is being tried in Delaware, Smartmatic's will be in New York. So another potential defamation-law-politics wrinkle is: what if Fox News loses in one venue but wins in the other?
UPDATE, well we're not going to get to any new "Sullivan" case law here - Fox News just settled with Dominion. Key points:
-- Fox News has agreed to pay $787.5 million in damages, which is at least ten times the total value of Dominion Voting Systems as a company. Elie Honig, former assistant US attorney for the Southern District of New York, called the settlement amount "astonishing."
-- in its statement announcing the settlement, Fox News said, "We acknowledge the Court’s rulings finding certain claims about Dominion to be false. This settlement reflects FOX’s continued commitment to the highest journalistic standards."
-- Dominion's lead attorney called the settlement "vindication" as well as proof that "The truth matters, lies have consequences." Dominion's CEO said, "Fox has admitted to telling lies."
-- Dominion's defamation lawsuits against right-wing news networks OAN and Newsmax, and against certain individuals including Rudy Giuliani, Sidney Powell, and Mike Lindell, are unaffected by this settlement. Smartmatic's defamation lawsuits against Fox News and against most of those others are also not resolved by this settlement.
My previous comment was obsolete before I posted it. The amount of the settlement suggests that Fox realized that if it went to trial, it would lose, because a jury might well have awarded less.
Yes, clearly. Realistically even if a jury had awarded that amount the judge would have reduced it. Fox concluded that having its behavior fully aired in a trial was an existential threat, whereas paying this settlement isn't.
Nowhere near half the country gets its news from Fox. Fox has strong daily viewership only in comparison to the other cable-news channels.
But anyway the Fox audience will literally hear of this one way or another (a relative will mention it, CNN will be on when they're walking through an airport or someplace, whatever). They'll just dismiss it as either fake news or Deep State something something something.
(Not being hyperbolic here, I have Fox News-watching relatives and that is literally exactly what will happen. Maybe already has happened as I type this.)
I’m all for lively discussion about rules of the community, but Scott could we make an effort to constrain discussion to subcomments in a single place? Maybe like how you do in the Classifieds. It seems that whenever you pose a question in the open thread notes, almost half the thread comments are in response, and it makes scrolling through comments less fun.
How about this as a solution: No refunds, but *all* bans are temporary. However, the duration of the ban increases for repeat offenders - exponentially so for non-subscribers, but only linearly for paid. Also, in the moderation policy blurb, make a strong suggestion that folks prone to offending pay a bit extra for the shorter term subscriptions (which will be canceled when the ban term exceeds the subscription term).
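To make the escalation concrete, here's a rough sketch of how that schedule could look (the one-week base and the growth rates are made-up knobs of mine, not anything Scott has proposed):

    # Hypothetical ban-escalation schedule for the proposal above:
    # linear growth for paid subscribers, exponential for free accounts.
    def ban_duration_days(offense_number, is_paid_subscriber,
                          base_days=7, linear_step_days=7, growth_factor=2):
        if is_paid_subscriber:
            return base_days + linear_step_days * (offense_number - 1)   # 7, 14, 21, 28, ...
        return base_days * growth_factor ** (offense_number - 1)         # 7, 14, 28, 56, ...

    for offense in range(1, 6):
        print(offense, ban_duration_days(offense, True), ban_duration_days(offense, False))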
Excellent point, particularly given that the comments section here seems to be of the old-school variety that shows things in time order, rather than attempting to sort by usefulness/popularity/controversy/attention/profitability (for better and for worse).
I’ve often thought this about regular posts, especially AI ones but with examples under every subject, where ten people will post basically the same comment. I bet Scott could anticipate most of these and write a top-level comment for them, and then pin those comments (to give them enough Schelling pointness that lazy commenters expect more attention by using them than by posting to the main thread).
I'm all for it when it's an interesting broad topic. But this one struck me as a procedural matter, more like when there were a huge number of comments in one open thread about whether people preferred the light blue background color to the white one for the website.
But now that I re-read what Scott wrote, I see there's actually no question in the post! If I remember, next time I see an open thread prompt like this I'll leave an early comment saying "Thread for people commenting on topic Scott brought up" and see how it pans out.
Scenario: An oligarch, Middle Eastern oil tycoon or eccentric billionaire dictator builds an exact replica of the WTC twin towers in an important city in his home country. What is the reaction from various people across the world?
Of course agent-targeted full-world-model-based long-term-planning AIs can be more inventive in their damage, but it doesn't look like the AI companies are doing _exactly_ that or want to, does it?
Elon Musk: «a maximum truth-seeking AI that tries to understand the nature of the universe», «unlikely to annihilate humans because we are an interesting part of the universe»
It’s really weirding me out that no-one’s come out in favour of giving more leniency to paid subscribers than to free accounts. Probably the majority of paid accounts are long-term users of the blog, rather than newcomers intending to troll. Those people have supported the blog with money and by being part of the community, and removing them from those roles would on average do more harm than removing a typical free account. Long-term readers are also more likely to be responsive to social pressure from Scott, so the alternatives to a ban are likely better. Presumably Scott only uses bans in extreme cases or after someone’s ignored correction the first time anyway, but for any given pattern of misbehaviour I think more leniency is appropriate for paid subscribers than for free accounts. Scott’s stated leniency at the margins but hammer in clear cases seems right to me. (I’m not currently a paid subscriber.)
Lots of people have similar complaints, so I'll reply here and you can see my reply to Deiseach for more. I don't want people to be immune to bans if they pay. But I think Scott wants to ban people more often than he actually does, and I think the fact that this remains one of the better comment sections on the Internet is holding him back - he doesn't want to stifle that atmosphere. As a solution, he could let paid subscribers accrue 150% or 200% of a ban before he actually bans them. This would let him grow stricter with everyone else for the same amount of risk of alienating his core fanbase.
I think the free accounts get plenty of leniency already; by the time they cross over into bans there's usually obvious intent to pick a fight. So any additional leniency would be leniency to obvious intent to picking fights.
Eh, I'm part "with great power comes great responsibility" and part I don't want "one law for the rich and another for the poor".
Paying a voluntary subscription should not buy immunity from Da Rulez. I've shot my mouth off before and eaten bans for it, and that's fair; I don't want "oh but she pays a sub, so she should get special consideration".
(1) If we've been around a long time then we should know the mores on here
(2) Simply paying a sub should not be seen as a license to troll, which some new users might do if they see "one law for the paid subs, another for the freebies"
Other members of the community intervening to ask for mercy when it comes to potential banning is a different matter, but simply "I paid to be on here so I can say whatever the hell I want" isn't the standard.
I agree paid subscribers shouldn't be functionally immune to bans. Scott already semi-quantifies the bans; maybe subscribers should get to 150% or 200% of a ban before they actually get banned? That would be slightly "another rule for the rich", but in a transparent, limited way. (Any fraction of a ban someone has before they start subscribing should be deducted twice.)
If you’d been told “You’ve earned 100% of a ban, but you’re a subscriber so I won’t ban you until 150%”, would you have been less deterred from your nefarious actions? Would you have acted out more if you knew you had 50% more free rein? That doesn’t seem likely to induce much extra bad behaviour.
Bear in mind Scott probably wants to tighten the rules, but doesn’t want to get significantly harsher towards subscribers. Scott can either crack down less hard on paid subscribers, or he can crack down less hard on everyone.
"If you’d been told “You’ve earned 100% of a ban, but you’re a subscriber so I won’t ban you until 150%”, would you have been less deterred from your nefarious actions? Would you have acted out more if you knew you had 50% more free rein?"
I think if I was gambling on leniency, I would be more inclined to push it: "it's okay, I have this margin of protection".
Generally, when I've been banned here and elsewhere, it's because I was sufficiently engaged/enraged about something to go "Full speed ahead and damn the torpedoes, I can't just sit here and swallow my tongue about this".
If I believed I had a "free suspension of ban" card, I would definitely be more careless - not holding back until I *really* felt "speak or burst" about something, but telling myself "it's okay if I go over the line, this time doesn't count".
And I think that would be a very bad habit to acquire, at least for me.
Maybe the 150% threshold should be suspended for two weeks after any warning or demerit. I think Scott only gives partial bans when he thinks a comment does less damage to the community than the cost of losing an average member. And like it or not, the average paid subscriber probably comments better than the average free subscriber. Not infinitely better, not sufficiently better to outweigh serious bad conduct, but enough to outweigh a couple of additional borderline comments. You would learn slightly worse life lessons, but I’d prefer the comment policy was optimised to suit normal discussion participants rather than to suit wrongdoers.
Also, I think Scott has never got around to deciding how partial bans should expire. This could stand in for that.
I don't think being a paid subscriber should be armor for bad faith trolling, but I'm a bit uneasy about what looks an awful lot like fanboy groupthink in this thread.
Hyperbolically compare some elements of the rationalist community to Squeaky Fromme? Twenty years in the electric chair!
If a long-term reader has not yet learnt what proper behavior is, they are quite unlikely to learn it by themselves if they are allowed to go on misbehaving without experiencing a ban. People pay Scott for access to his writing, not to get a "get out of jail" card.
I'm possibly in favor of giving paid subscribers long temporary bans instead of permanent bans; does that count? (It's "possibly", because I wouldn't want Scott having feelings like, "Oh, no, so-and-so is coming back again this week".) But I don't think I'd want a real difference in the basic decision of whether to ban or not. It's simply a question of what types of discourse should be encouraged and what types discouraged.
Alternatively, it's simply a matter of whether a piece of text-in-context pattern-matches to the desired form of discourse... :-)
Bans are typically for things that (according to Scott's judgment) make this a worse place. So although the paying subscriber supports this blog financially, if they make it a worse place, that has a negative impact on everyone, including other paying subscribers, or other potential paying subscribers.
Hypothetically, if a paying subscriber makes a few people go away, and two of those would have otherwise become paying subscribers in future, this is a net loss even from the financial perspective.
That said, leniency in sense of "start with a weekly ban rather than a permanent ban" seems okay to me. Assuming that, if the behavior does not change, a permanent ban will follow anyway.
You may be right. On the other hand, viruses are part of the mammalian microbiome. They're not just the villains they're made out to be. And while improved capabilities from AI have lots of clear applications, it's not clear how well humanity is going to do sharing the planet with AI.
Hey, I didn't say "none" or "only murdering other people", a perfect understanding of viruses could probably revolutionize medicine and agricultural pest control, but it would have little impact on engine design, construction or mining; while all those sectors could be revolutionized by a sufficiently powerful AI.
Scott, I think you should suck up to your paying subscribers. Find out about their prejudices, their shitty little favorite commercial items and their sexual kinks and write absolutely nothing except articles about how the best people all prefer, like, settings that are slightly woke but not REAL woke, Brazilian butt lifts and light bondage and discipline in bed.
I am trying to point out what a bad idea it is to treat paying members differently. I don't want SSC becoming another blog that just caters to its revenue stream.
Speaking as a paying user: PLEASE ban paying users who break the rules and treat them the same as non-subscribers. The service that's being paid for isn't commenting, it's seeing extra content.
When I go "paying", it will be for the service of Scott keeping up the wonderful work of this blog. - The extra content? Hardly; that's just for the hard-core fans - which is me. - The high-quality comment section? Yep, very nice to have (see Scott's following IRB-comment post). And Scott's wise ban policy does help: keep those bans coming. - If paying subs ever go below 2k, I will sign up.
Scott banning bad faith / troll / very low-value commenters is why I'm here. It's one of the most intellectually hygienic places on the Internet, as far as I can tell.
This is one of the few places that I have been able to find where the level of discourse is both convivial and of high quality.
I may not comment very often, but I do read avidly, and I strongly support whatever measures need to be taken to ensure that this community can remain both civil and intellectually curious.
Making preparations to work for our robot overlords.
Many people laid off by tech companies ask, what skills in a post-GPT4 world will contribute sufficient value to be competitive?
One guess from an AI researcher was, "Um, hands? Robotics is currently far behind LLMs"
It's my prediction that as LLMs gain competency, a key skill will be developing a collaborative orientation, where even hard-won expertise must be subject to consultation with state-of-the-art LLMs.
Coders will likely be early adopters, since GitHub Copilot has demonstrably boosted output for developers.
Fields particularly exposed to the power of LLMs: lawyers, accountants, and medium and small business service providers (marketing, design, SEO all come to mind).
Since SSC readers are early adopters, the level of skepticism and resistance here would not be reflective of the general public.
Have you encountered any opposition to GPTs when you've shown them to others?
Do you have any ideas on how to make the collaboration between humans and AIs most effective?
And how long do you estimate that humans will need to be trained to cooperate, before LLMs figure out how to persuasively nudge experts to rely on them more?
LLMs give approximate but plausible answers, which is not what accounting is about, I hope. My sister is a CPA.
However, LLMs are fast and cheap and probably won't nag you about getting your records in on time, so they might be used for accounting. This would be bad.
I think the claim is AI will start making accounting easier in five or ten years, and thereby it will reduce the number of jobs. The model for using it clearly doesn’t exist yet, but it’s claimed to be more likely than in many other fields.
I’m unconvinced it will have a larger impact than offshoring has had on total jobs, and it may hit those overseas jobs harder than the remaining domestic jobs. But I expect it will still have a noticeable impact within twenty years.
Jobs are more likely to be outsourced if there’s less need for a trusted human to do that job. The downsides of automation (less legible assurance of good work, have to explain the task to someone with experience in similar situations but who doesn’t know all the unwritten facts about your situation) have a lot of overlap with those of outsourcing to cheaper countries, and the primary benefit also overlaps.
How far are we from computers that can maintain themselves with no human help?
It's all very well to think of computers as an existential threat, but would a program that is trying to maximize something take into account that it can't keep on maximizing if its parts wear out or it can't get enough electricity without people? Would it just think that it will solve that problem when it gets to it?
Science fiction scenario: remnant of humanity is enslaved by the computer/program.
I think the reason we can count on computers being maintained is that there's a lot of human allegiance to computers (or AI or similar tech). Some people are in occupations where it's their responsibility to make computer parts, build computers, repair computers. Many are in occupations where they simply must have computers, and in some cases the lack of computers would hurt a lot of people (jets? hospitals?) And then, once you start thinking about AI, you have to think about people's personal attachment to it. In early days, you had to be mentally ill to think the AI you were accessing was conscious and cared about you, but the better AI gets the less mentally ill you have to be to think of them that way. And in 10 years, they may be so capable that you'd have to be a bit crazy to NOT think of them as caring conscious entities with needs and rights, including the right to medical care.
Well, let's see, there's about two dozen very unnatural substances they'd need to mine, refine, or synthesize, ranging from ultrapure silicon to weird etchant gases, optical glass of certain specific qualities, a variety of specialized plastics. There's probably a few hundred supply chains that need to be established and maintained, most of them crossing one or more oceans. There's a very expensive and very complex fab that needs to be built and run for making the chips themselves. Compared to the almost trivially easy way human beings can sustain themselves from the environment ("I say, there's a ripe apple on that tree...(pluck)...mmm....tasty") any kind of computing machinery would be at a staggering disadvantage in terms of self-maintenance. Kind of like shoving a human naked out the airlock on the Moon, the environment is *utterly* inhospitable and ungenerous by nature, and only enormous effort can wrest from it what you need.
This comes up more often when people mention mining the asteroids. We don't even know how to do automated mining on earth yet, and that would be a much lower level of automation.
Sure. If an asteroid made of solid pure metal were already in low orbit around the Earth, there are only a handful of metals that would actually be worth the cost to fly up and haul it down. Gold, silver, platinum and palladium, maybe. Certainly not copper, aluminum, iron, lithium, et cetera.
I wouldn't say automated mining per se is terribly difficult -- they already use a lot of machines -- but automated planning is another story. It takes a great deal of specialized knowledge and experience to say we'll dig here in this direction instead of over there in that direction, and it combines knowledge of the geology, the economics, and the safety.
There's also what I call the "Star Wars solution" - keep all the AIs in a human-size form factor that you can "shut down" by shooting with a regular gun. Also seen in Harry Potter: "Never trust anything that can think for itself if you can't see where it keeps its brain."
Far enough that all we can say is they aren't on the horizon yet - we can't be confident they'll never exist, but they're not in the forseeable future.
Any resources on how people live in peace? It happens quite a bit of the time, and I don't think it takes absolute love or idealism, and it isn't just about punishment.
So what is it? Is it just that violence is too much trouble except when it isn't, or what?
I'd echo Carl Pham, and also ask about the category of "conflict without violence". What forms of dispute resolution are available, and what do people do when those forms break down? How would society handle people who are clever enough and malevolent enough to find ways to cause injuries that aren't covered by the dispute resolution system?
You probably don't mean just live in peace. Slaves live in peace, if they always obey. What you probably have in mind is "live in peace and also experience some acceptable degree of independence, respect, equity and justice." It's that "and" that makes this compound predicate difficult to achieve, because it turns out people have some pretty significantly varying definitions of the latter components. "Peace" is easy to define. The other things...not so much.
Nitpick alert: Slaves are at the same risk of war as their masters. Also, I suspect they're at some risk from other slaves. And "obey" actually parses as "not anger"-- a master in a bad mood might harm an obedient slave. Obedience improves the odds of safety rather than guaranteeing it.
Your general point stands. However, pretty good peace still exists a lot of the time.
Sure, but as the conflict studies people say, if you want to increase the level of peace, you need to start off looking at why people become violent in the first place. Sometimes it's just a dysfunctional personality -- e.g. why most of us think most crime occurs: this person *could* achieve his legitimate ends peacefully, but is too stupid/impatient/sociopathic to go that route -- and maybe improved mental health care, or locking such people up, or threatening to (which are actually forms of violence themselves) will work.
But sometimes people are violent because of some genuine need for one of the goods above, and they don't see any other way to achieve it. Exempli gratia, if you are a group of 25 Jews at Babi Yar and the German soldiers have been told to machine gun you and bury your naked bodies in a pit, probably your only viable option is violence. If you're a half brigade of Allied soldiers that just happens to stumble on this scene, again your only viable option is violence -- this is not a case where sweet reason or compromise are likely to produce a useful outcome.
For that matter, it's not obvious to me that non-violence is all that high a goal in the first place. We kill to eat, right? That's pretty violent per se. I'm not sure why we should expect more, as a species, than we accept as a matter of course might happen to other species. We're not that special. If we don't find it shocking and unnatural that lions kill gazelles to eat, and we kill cows and pigs to eat, or wolves because they make it hard to raise sheep, or rats because they spread plague, should we really be surprised that we ourselves run the risk of being killed if it suits the perceived needs of others of our species, or other species (or superintelligent AIs)? Not clear to me.
The way to put a stop to purposeful violence is historically to have sharp edges, like a porcupine with its quills, or a tiger with its claws, or a country with supersonic fighters and nuclear weapons, so that violence against you becomes too expensive to be worth the gain. But...this necessarily involves acceptance of at least the potential need for violence, and preparation for dealing in it, so while it may (and hopefully does) end up being a peaceful situation, it's the peace of an armed standoff, not the absence of violent means and intentions entirely.
Vedic Indians, who were obsessed with figuring out the nature of consciousness, perhaps.
I believe Roberto Calasso says, in "Ardor", that they seem to have been uninterested in conquests, monuments, travel, etc. He argues that this is because they were only interested in understanding consciousness.
They were monomaniacal nerds.
They invented algorithms to record what they'd figured out (such as rituals for chanting to put their minds in certain states). They were obsessed with grammar (creating a meta-theory of language 2000 years ago). They were also obsessed with pronunciation as they believed that thoughts and language were deeply linked and to figure out the nature of thought, you had to be extremely precise about language.
I have often wondered if they were very peaceful as well, since they were not interested in conquests. I don't know the answer.
The grammar rules they came up with, to define a well formed sentence in Sanskrit are mind-blowing.
My concern regarding Blattman, without having read him, is that he admits to not really knowing anything about Rene Girard. I don't know how you can seriously discuss violence without seriously taking a look at Girard.
Well, Blattman has apparently done work on the ground in violent places from Colombia to the South Side of Chicago, and has interviewed participants in violence. And of course he is presumably familiar with much of the vast amount of empirical work that has been done on violence over the past 30 years. That doesn't mean he is correct -- this is a very, very challenging area of inquiry -- but I think it is fair to say that he might have at least a couple of useful insights.
1. He may know some stuff, but the first thing one usually does is a review of past research - and Girard's anthropology stretches into an investigation that covers quite a bit of history by studying the record embedded in literature. If you are going to study violence, I just can't understand how you wouldn't even know about Girard.
2. The South Side of Chicago is not particularly or especially violent, but I suppose I am biased by living in Baltimore for 40 years, growing up near Chicago, and every now and then visiting my great-grandfather, who lived on the South Side, as a child.
3. When he stumbled onto Girard because of an advance pre-interview question from Cowen, he says that when he took a few minutes to look him up, he thought Girard didn't seem like anyone who had real experience of violence. A pretty pompous remark given that Girard grew up in WW2 Europe and was personally touched by a grandfather seriously injured fighting in WW1, as well as by uncles etc. who were involved in the fighting.
1. Perhaps he is no longer considered relevant, or never was, outside anthropology or philosophy. I have a fair amount of graduate work in political violence and criminology, and have not studied him.
3. Living in Nazi-occupied France was not much of an exposure to violence. Nor is having a grandfather injured in WWI very relevant. And it certainly seems that his work was purely theoretical.
1. You're obviously getting ripped off in your graduate education.
2. I was just in Chicago this weekend. My son lives there and went to school there. My youngest brother and his family live there. Another younger brother lived there and went to school there in the 80s. I know a lot about Chicago and don't need a wiki page to tell me about neighborhoods. Jim Croce even used to be a staple at weddings I attended.
3. Your ignorance of what WW1 survivors knew about violence and what it was like living in WW2 Europe is astounding. Where you pooh-pooh Girard as being merely theoretical and philosophical, you obviously do not understand how he developed his theory: namely, from studying evidence. Violence has been a feature throughout human history. We know this from the literature and mythologies that societies have produced. This is what Girard studied, and he found mimetic rivalry cropping up time and again as the reason. From time to time violence gets temporarily tamed: this is also revealed in the human record, and Girard found evidence of the violence-suppression mechanism: namely, early religion (which is concomitant with the development of civilization) and scapegoating as a ritualized solution to temporarily end the contagion.
All the measurement in the world on violence, without theory, is not so useful. What is required is more than looking at the last 50 years. We should look at all of human history. Physics works because it predicts, but also because it universally explains the phenomena that happened in the past. This is why both experimental physics and cosmology are connected to the same field.
West Garfield Park data has also got to explain the fighting among the occupied in Paris, as well as Russian violence against Ukraine, as well as Mayan violence, as well as the sacking of Rome, etc. etc. etc., all through human history. Fictional stories like that of Leroy Brown as imagined by Jim Croce might reveal some important things about the root causes of violence, as much as or more than the mere West Garfield Park metrics.
I think this is the fundamental question of Hobbes' Leviathan, which is taken as one of the starting points for political philosophy. I don't think the particular mix of normative or descriptive claims that Hobbes makes is widely endorsed any more, but it's a classic starting point. There's a lot of recent literature on social norms, both in philosophy and in psychology, that is likely relevant. But it's definitely a striking question - people don't have aligned value systems, and yet they manage to avoid running each other over in whatever they're doing, how?
While I think that criticism of people being "basic" is generally pretty bad (folks should have better things to do than pick on other people's tastes, it's mostly just dumb and unimpressive signalling when you do) I think that the idea of "basic" is about more than just "liking the things that everyone likes because they're nice", it's about "following dumb trends uncritically".
Basic bitches don't like things because they're great; they barely like anything that existed more than fifteen years ago. They chase fashion, but they don't even do it well.
How nice to be an expert on other people's motivations.
Basic bitches? Can I make some guesses about your motivations?
People like things for being intensely delightful for them. Or for being pleasant and comforting. Or for showing that other people are on the same side. Or for matching moods. Or for sharing the anger they feel.
Do you really hate people for being kind of pleasant and insipid? Or maybe just women who are pleasant and insipid?
This seems like an unnecessarily hostile response to what I interpreted as an attempt to explain the phenomenon, rather than endorse a normative position.
It reads like a linguistic misunderstanding to me. I'm guessing Melvin is using "basic bitches" as a specialized colloquialism, a phrase that means something specific and has little to do with being female per se, kind of like "the children of Adam" is a specialized phrase that has nothing to do with being a child nor having a father actually named "Adam." On the other hand, my guess is that Nancy Lebovitz thought the word "bitches" was just being used in an offhand way to replace "women I don't like" and took offence at what she saw as unnecessary coarseness.
I know, right? I even put a disclaimer in the first sentence lest it be misinterpreted.
One nice thing about talking to GPT instead of humans is that GPT actually pays attention to everything you've said in the last 8192 tokens whereas humans don't.
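For what it's worth, that "8192 tokens" is just a fixed sliding budget: anything that falls outside the window is simply never seen by the model. Here is a minimal sketch (Python, using the real open-source `tiktoken` tokenizer; the helper name and token limit constant are just illustrative) of how a chat client might drop old messages to stay under such a limit:

```python
# Minimal sketch: keep only the most recent messages that fit inside an
# 8192-token context window, dropping the oldest messages first.
import tiktoken

MAX_TOKENS = 8192
enc = tiktoken.get_encoding("cl100k_base")

def trim_history(messages: list[str]) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):            # walk newest -> oldest
        n = len(enc.encode(msg))
        if used + n > MAX_TOKENS:
            break                             # everything older is "forgotten"
        kept.append(msg)
        used += n
    return list(reversed(kept))               # restore chronological order
```

So "pays attention to everything you've said" really means "everything that still fits"; a long enough conversation gets silently truncated from the top.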
I'm looking for writing on the phenomenon of using mental illness as a 'badge of honor', related to social media influencers leveraging their mental illness to gain popularity. I've found plenty on Tourette's syndrome and tic disorders but not really anything on autism, which is a more interesting case because it plays into how autism awareness advocates can very easily trivialize autism that leads to profound disability in the process of holding their own highly-functioning variant up as a central example. If anyone knows of any published scientific literature on this topic, I'd appreciate some pointers.
I think it's not a fad because it's too boring. Very severe cases aren't posting on TikTok, they're in padded rooms. Functional cases are just that slightly odd/weird person who's socially awkward and talks too much about their hobby horse. It's not colourful and dramatic.
I have just read the first one and: yes, a hundred times yes.
> There’s something absurd and cruel about people who have attended elite colleges and secured enviable jobs commandeering a conversation that is ethically bound to consider the interests of those who never had that chance.
But this seems to be a wider pattern. I mean, isn't e.g. the debate on sexism also dominated by rich women from elite colleges? So why shouldn't the debate on autism be dominated by rich people on autistic spectrum from elite colleges?
The full pattern is: there is a group X that is discriminated against. We try to raise awareness about their suffering, but the debate soon gets dominated by those individuals in group X, who happen to be rich and elite, and it focuses on problems of this subgroup specifically.
On one hand, yes, they also have some problems that come with being a member of the group X. On the other hand, no, their typical problems are *not* representative of the typical problems of an average member of the group X. (Yes, rich women are also targets of sexism. No, the biggest problem of an average woman is not that she is not respected as a CEO.)
Thanks, I did have the second of those posts bookmarked but I was hoping there might be some academic literature on, say, people trying to get a diagnosis of autism as a result of social media influence. It's not 100% the same as what Freddie describes so I'm not quite sure what to look for.
I'm not sure people are looking for formal diagnoses; from observation, self-diagnosis (usually via the internet) is quicker and less trouble, and lends the individual just about as much credibility and cachet socially.
It also has the (social) advantage of being more flexible, as one can then self diagnose another disorder if necessary, as trends change. I have watched multiple young adults gain significant social leverage by self diagnosing autism/ADHD/BPD/MDD in the last 18 months or so.
It seems like an unhelpful and somewhat self damaging trend, as I am doubtful that mental health professionals would necessarily reach the same diagnoses.
To be clear, these people clearly do have mental health challenges to varying degrees, but almost certainly not the ones they think they do. And either way, they would benefit from professional support.
> I have watched multiple young adults gain significant social leverage by self diagnosing autism/ADHD/BPD/MDD in the last 18 months or so.
How does one do this? I have been diagnosed with one of these disorders (by a mental health professional, not myself) and it would be nice to have some positives out of this to balance out the negatives, if possible!
Edit: to be clear, I'm not condoning their behavior - more that, conditional on actually having a condition that gives you a "disadvantage" and there existing a way to offset that disadvantage at least to some extent, it would be great to be able to do so.
If I'm being mean about it (heaven forfend) they seem to use "and I'm autistic/on the autism spectrum" as an all-purpose defence on social media about "don't call me out for being an asshole, I'm AUTISTIC you ableist bigot, I can't help how I come across".
I don't have a formal diagnosis, and I think I just naturally have a cranky personality. But by the powers, I'll never start crying online about being persecuted for something (assholish) I said because the mean people don't allow for my *autism* and give me *accommodations* and excuse me for being a cranky bitch because I'm *neurodivergent*.
It seems to be a source of social status in their friendship group. I get the feeling that other friendship groups may vary, though, so it may not work for everyone!
FWIW, it seems to me that young people who are very conscious of being progressive in their views are also more likely to be very accepting of mental health challenges.
Try searching Google Scholar for "psychiatric fads." You could also try "psychiatric fads laymen" and similar. By the way, it's not just the public that's subject to these fads -- professionals have their own versions, I think.
This is not quite what I mean. The fads described include overdiagnosis, overmedication and the proliferation of ever more psychotherapeutic methods, and of those things only overdiagnosis is sort of adjacent to the phenomenon of mental illnesses being seen as desirable. The handful of papers about fads all decry how psychiatrists are being taken in by them, not the general public.
Refunding banned commenters creates the perverse incentive for people to say rude things to get the refund. So don't give them any more refund than people who just politely ask for one. (ie partial refund based on unused time)
I would think your paying customers should be totally eligible for bans, but that you might want to send them a brief note/warning first. Administratively that is annoying, but that is business/customer service.
tl;dr Young, earnest tech/data bro for hire (I hope this is appropriate to post on a non-classifieds thread? Apologies if not).
I am a 26 year old with a BS in statistics from Duke and a couple years of software engineering experience. Looking to rejoin the workforce after a brief hiatus and so far have been only frustrated. Ideally looking for something where I can code and/or analyze datasets. Based in NY but willing to relocate or work remote. If your application has a required diversity statement I will fill it out, but I will use ChatGPT to do so. Bonus points if your org/position tickles any EA/rationalist sensibilities.
"[from an article about how US arms production can't keep pace with the aid its supplying Ukraine]: "the [West's] defense industrial base cannot keep up with Ukraine’s expenditure of equipment and ammunition"
So West's defense industrial base < Russia"
The media narrative is that this war has demonstrated Russia's weakness, in that it can't defeat the much smaller Ukraine. But of course, Russia is really fighting Ukraine + some fraction of NATO's total strength.
The article mentions that Russia expends artillery rounds at around 10x the rate that the US can supply rounds to Ukraine, suggesting that Russia's production capability, in artillery rounds at least, is actually greater than NATO's.
Comparing by GDP, the Russian economy overall is obviously much smaller, so you'd think its defence production capabilities would be much lower, but that doesn't factor in things like the difficulty of repurposing service-sector-heavy Western economies for military ends, or disparities between dollar-denominated measures of economic size and real production.
I'd like to know if anyone thinks Russian industry's total potential war-production is actually comparable to the US's, Western Europe's or even the whole of NATO.
Also, how much of NATO's total power is Russia contending with at the moment? How should that factor in to our overall estimation of Russia's military strength vs the West?
The West is not capable of keeping up with Ukraine's expenditure of ammunition. But Russia is not capable of keeping up with Russia's expenditure of ammunition. It is not even clear that Russia could keep up with Ukraine's expenditure of ammunition. Russia is firing artillery at maybe a third of the rate they were six months ago, and to manage even that much they have to buy ammunition from the industrial powerhouse that is North Korea.
This war has, from day one, been consuming ammunition faster than the entire world is capable of producing ammunition. Because the entire world vastly underestimated the amount of ammunition a war between major industrial nations would consume, and because the wonder that is specialized Just In Time manufacturing means we can't reallocate production in less than years.
This says nothing about the relative size of anyone's industrial base, or even their defense industrial base. It's mostly about the stockpiles people had going into the war, and those are now significantly depleted (on both sides).
The West produces fewer artillery shells than Russia. That’s a far cry from saying “the West has less of an industrial base than Russia”.
What it really means is that Russian doctrine places a much greater emphasis on massed artillery, and Ukraine being a post-Soviet state, they do too, so this is an artillery war that the West didn’t intend to fight.
I think we are definitely learning that a conventional conflict with a near-peer could go on longer than we thought, and there needs to be a better emphasis placed on capability to manufacture and maintain large reserves of munitions, so hopefully that lesson will be applied, but it can’t be done overnight.
Russia is also having munitions issues - the general consensus seems to be that “launch one wave of thirty or so cruise missiles at Ukrainian infrastructure per month” is basically representative of their maximum ability to produce precision long range weapons. And while they are using a lot of artillery, they clearly still don’t have enough to effectively gain territory from the current frontlines.
Russia has an economy about the size of Florida's, and about half the size of California's. So your question is similar to "Could Florida outcompete the other 49 states in military production?"
Main problem in supplying Ukraine with ammunition is that their guns were mostly made in the Soviet Union. Yes, there is some artillery provided by the West, but so far it is only a minority. And Russia does have an advantage over NATO in producing Soviet style ammunition.
Overall, it seems obvious to me that if NATO decided to go for full industrial mobilization, Russia would be quickly overwhelmed. Like, browsing Statista, the Russian economy is 53% services, whereas Germany is 69% and the USA 77%. So yes, Russia is more "industry heavy," but nowhere near enough to overcome the huge disparity in both productivity and especially population.
Russia is desperately burning through its backlog of ancient tanks to make up for the shortfall in its modern tank production. I would guess that it depends on the specific types of supplies, who can produce what faster, but in general maybe both sides are burning through supplies faster than they can make them.
Russia had an extra-large inventory of materiel prior to the invasion, and while the shells probably have a much shorter shelf life than the tanks and guns the Soviets produced like there was no tomorrow, we are still nowhere near the situation where Russia expends only what it recently produced. Neither is NATO, really, because everyone is stingy, doesn't ramp up production, and tries to get favorable future contracts in exchange for shipping some older materiel to Ukraine.
And the more shells Ukraine uses, the more we need to consider keeping our own stockpiles larger for future contingencies, meaning less is available for Ukraine.
Cruise missiles seem to be a place where Russia is literally expending them as fast as they can be produced.
"The article mentions that Russia expends artillery rounds at around around 10x the rate that the US can supply rounds to Ukraine, suggesting that Russia's production capability, in artillery rounds at least, is actually greater than NATO's."
How does that follow? Surely you can expend more than you produce if you have a stockpile.
If you read Russian bloggers who have information from the front lines, you will know that the Russian military has been going through an "ammo famine" for the last couple of months, which has resulted in a Russian inability to mount a proper offensive.
The "special millitary operation" was supposed to be quick, using already existent stockpile. Russian industry is completely incapable of replenishing the loses.
Looking through some of his tweets, he seems to have quite a trollish Twitter persona at times. That's disappointing, especially since I really like some of his other work.
It's my understanding that the US has mostly been giving Ukraine old munitions from storage, not producing new ones to meet the need. The US has not increased production orders for military equipment (or at least had not as of when I read about it, maybe 4-6 months ago?). So we're not really seeing US production up against Russia, let alone adding NATO to the mix as well.
If the US actually wanted to increase munitions production, it would look very different and would completely overwhelm anything Russia could produce.
Manpower-wise, Russia is competing with essentially none of NATO. Ukrainian troops are not overly numerous and are not trained on much of the equipment NATO could supply them with if serious. It would be very expensive, but NATO could overwhelm the troops being used by Russia in a couple of days - not much different from the Iraq war in which one of the world's biggest armies was destroyed with very little resistance. Of course, Russia is not on full war footing either, though they are obviously much closer than the US or Europe.
I'm no military expert, but what you say about US military production doesn't match what I've read. Factories would need to be built and that takes time. Example:
> Research conducted by the Center for Strategic and International Studies (CSIS) shows the current output of American factories may be insufficient to prevent the depletion of stockpiles of key items the United States is providing Ukraine. Even at accelerated production rates, it is likely to take at least several years to recover the inventory of Javelin antitank missiles, Stinger surface-to-air missiles and other in-demand items.
> Earlier research done by the Washington think tank illustrates a more pervasive problem: The slow pace of U.S. production means it would take as long as 15 years at peacetime production levels, and more than eight years at a wartime tempo, to replace the stocks of major weapons systems such as guided missiles, piloted aircraft and armed drones if they were destroyed in battle or donated to allies.
Oh, I have no doubt that the more advanced systems, especially missiles, drones, and planes, would take much longer to produce. They are far more complicated than artillery shells, which was the Russian comparison. You need very specialized factories with a lot of technology and technical knowledge, which takes a lot of time to develop.
What I'm referring to is basic munitions - guns, bullets, artillery shells. Would we need to retool and possibly build more factories? Quite likely. But those are fairly easy problems to solve, and much faster for simpler arms than for advanced missiles, planes, etc.
Seems unlikely. The Russians can't even clear the skies of the MiG-29s the Ukrainians fly, a jet that was designed in the late 70s, and they keep being droned and HiMARsed in a bad way that speaks poorly of their surveillance radar capabilities. That is, the evidence that their air defenses could cope with a flood of stealthy aircraft using HARMs to take out all the fire-control radars on Day 1 seems thin.
Iraq had one of the best air defense systems in the world outside Russia, with for-the-time latest Soviet designed tech. Keyword HAD. It got destroyed pretty thoroughly.
After learning some hard lessons in Vietnam, the US has gotten very very good at SEAD (suppression of enemy air defenses). This is not a skill set that the Soviets really developed, and it’s not something you can just hand over, so Ukraine lacks it.
Ukraine's air force has managed to stay flying despite having fewer and worse aircraft than the US with no stealth.
It might not be an Iraq-level stomp, but there is a lot of distance between "not getting completely flattened" and "can fight the USAF toe to toe," and I've seen little evidence that Russia is anywhere near the latter.
Even the 100+:0 combat record F-15 gets absolutely merked in (simulated) engagements with the F-22, even when outnumbering it severalfold. Russia could only almost hold their own against the 15.
Ukraine is getting a lot of materiel from NATO but the idea that the Ukrainian resistance is somehow representative of the maximum effort NATO could hope to do if they were fighting for their own territory is laughable.
Definitely in support of banning even paid subscribers. The comments section unfortunately will always reflect on you, so keeping it high quality is important.
I would encourage letting them off with a deleted comment + warning ONCE. Anything beyond that is a little too charitable. They paid for read access, not posting rights. Though if substack doesn't differentiate, they need to fix that.
I also agree with this, but unfortunately we are approaching the place in the development of a community where Scott will need to start appointing moderators and things get ugly like in the old days.
How about this? An appointed moderator can give temporary bans to anyone, but permanent bans only to non-paying subscribers. Only Scott can give permanent bans to paying subscribers.
This strikes me as the best solution. Commenting should not be a right that you can buy access to, so paid subscribers should be allowed to read the exclusive posts they paid for but not allowed to comment if they've shown themselves to be detrimental to the quality of the comment section.
A fair number of other substack authors require payment for the privilege of commenting on their posts. I tend to critique their signal-boosted posts on whatever blog signal-boosted them, and then never go back to that substack.
If you want to pay for the privilege of commenting, YKIOK, but I don't share it.
Yes, that "you" in my comment was very generic, not addressed to the poster I was responding to. (The comment was to them because they seemed unaware that pay-to-comment was a thing, possibly more common than otherwise, in spite of their "should".)
Was Pol Pot an example of a human 'paperclip maximizer' who maximized for 'equality' at the expense of all other values? Is it fair to apply the concepts of AI alignment to human leaders or systems, since we expect those things to align with the values of individuals to some extent? Since democracy was an attempt to align leaders with the values of those they are supposed to serve, does it make sense to apply these alignment concepts to AI so that we have some kind of historical and ideological foundation to start with? Would an AI version of Public Choice theory potentially emerge from this process?
I think corporations are the best model for this - they are these entities that, although they are operated by humans, are really in many cases best understood as using humans as sub-agents to help them try to achieve their own values, which involve maximizing shareholder value. Corporations have historically done all sorts of things (both great and monstrous - particularly if you look back at the famous 17th and 18th century corporations like the British and Dutch East India companies) in search of shareholder value.
Or governments, labour unions, sports teams, the East Lincolnshire Knitting Association, pretty much any group of people united by a common purpose. No need to single corporations out in particular.
I don't think this works. Many policies can be justified as "maximizing shareholder value" and management is in charge of deciding what those policies will be, which doesn't limit them much on concrete decisions. For example, you could justify going to China or *not* going to China as a way to maximize shareholder value. Also, Facebook expanding into VR and abandoning VR for AI can both be justified. It just depends on how long-term you're thinking, right?
"Maximizing shareholder value" is better thought of as an ideology that many people follow, more or less, because they believe in it and are also paid to do so. Also, we are past the peak of the "maximizing shareholder value" ideology with "multiple stakeholders" and ESG ideologies on the rise, which are even more flexible.
Maximizing shareholder value is one of those circular completely meaningless phrases they teach in B school. How exactly do you maximize shareholder value? Nobody has a clue. Increase sales? Well...often, but not always. Raise prices? Lower prices? Cut costs? Invest more? Lobby the government? Fire your expensive DC consultants and save money? Support regulation? Support deregulation? Promote from within, or avoid monoculture? Attract and retain the best talent, or keep wages down? Diversify, or focus on your core strengths?
And on and on. Absolutely any specific decision can be rationalized as "maximizing shareholder value." You could readily substitute "don't be evil" or "as Allah/the stars/my horoscope wills" and the phrase would be identically (non)informative.
It is not *completely* meaningless. For example, I think if you proposed giving more vacation to your employees (ordinary ones), that could be successfully attacked at court as not maximizing shareholder value, because it would be true for most possible interpretations.
There are many different and sometimes contradictory strategies you can use, but you cannot be clearly inefficient at extracting more money for the shareholders.
Pfft. I can readily argue in the Delaware Court of Chancery that giving more vacation time to my employees improves morale and retention, and therefore increases productivity more than enough to compensate for the extra expense, and that shareholder lawsuit will be promptly dismissed with prejudice. I mean, if it were otherwise, nobody would give paid vacation time at all, still less retirement or healthcare or parental leave benefits.
If what you're saying is that after the company has tanked, we can do a post-mortem and write up a B school case study that says well probably *this* was a bad decision and this other as well, because nobody even thought about how it impacted the bottom line, that's no doubt true, although whether anybody ever actually learns anything from that exercise is less clear to me.
But anyway, if nothing else, the ability of people to rationalize why DEI or ESG -- if you look at things the right way -- actually totally increases shareholder value tells you that H. sapiens, the rationalizing species, is capable of plausibly arguing that *anything at all* maximizes shareholder value.
All you really seem to need is some kind of consensus, a bit of tribal chanting, to back you up. I'm sure it would be possible to argue that a company is maximizing shareholder value by having a kennel full of therapy dogs available for stressed-out employees, and a trainer on staff, and if the meme went viral people would generally say oh yeah that makes sense, sure why not? and there'd be erudite arguments in academic journals about how this was mathematically provable, and anyone questioning it was a troglodyte thug who hated dogs, women, and people with funny accents.
I was going to say, "at least we don't see corporations trying to reprogram their shareholders to have more easily-satisfied values". But then I started to reflect on certain forms of corporate activism.
Absolutely. There's a reason corporations jump on the ESG bandwagon, and it has nothing to do with infiltration by wokism, over which the righties gnash their teeth. It's that if you're measuring your success by things that are easy-peasy to achieve, like social virtue signaling, it makes things a lot easier on management than if you're being measured by stuff that is really hard to achieve, like out-competing a bunch of smart and aggressive other players in the marketplace.
If I were CEO of GE and I could persuade the shareholders that they're getting good value because we put solar panels on all our factories, instead of because we designed a new jet engine that is x% more fuel efficient than Rolls-Royce can design -- which is really, really hard -- and therefore won a fat contract with Boeing or Airbus, I'm going to totally jump on that. Huzzah! Who *doesn't* want the standards by which they're measured quietly lowered?
That’s a very new development! For the past four centuries corporations have focused on maximizing shareholder value, and there’s questions about whether the recent ESG turn is actually a change away from that. While it may *look* like a flexible value system to you, I think that’s just because it’s complex.
I think it's also useful to think about other human institutions that are smarter/more powerful than humans in some domain--bureaucracies and markets are examples.
I think I emphasized corporations because "shareholder value" is such a clear metric that they aim to maximize, but many of these other examples are really good examples too.
Agreed (thirded? :-) ). In general, the majority of human organizations count as "entities that, although they are operated by humans, are really in many cases best understood as using humans as sub-agents to help them try to achieve their own values" - and often have at least some capabilities which go beyond what an individual human can do, and generally have terminal values which are not "aligned" (in the sense of being in the interests of the average human).
( There is a wide variety of such organizations (and, as None of the Above cited, institutions)... As previously cited: corporations, bureaucracies, and markets. Also armies, law enforcement, a wide variety of zero-sum organizations for the promotion of X to the detriment of Y )
Let's not ignore that PCM is an intentionally crude and simplistic model. Pol Pot was a {whatever mad idea happens to be occupying Pol Pot's head today} maximiser, and it is likely that a rogue maximiser AI would also be maximising more than one thing. For example, an AI owned by Big Paperclip Inc and instructed to advance its interests would not turn everything into paperclips, because it wants there to be wealthy consumers who have jobs in other industries to pay them wages to buy paperclips with, nice non-paperclip things for the directors and shareholders of BPI to spend their dividends and bonuses on, etc. So "humans don't fit the PCM model" isn't much of an argument, because AIs don't either.
Seems to me that democracy is a good example of the limits on how much alignment is possible among groups of people, or between groups of people and an entity that has power over them. Consider the degree of alignment between Biden and the citizens of the US. Obviously, it could be worse. But there are many people who hate and fear him, many who just distrust him or think he's lame, and a few who would kill him if they could. Likewise I'm sure there are many groups that Biden fears and dislikes, either personally or because they might interfere with policies he wants to have. There are groups that he would like to see identified and jailed for things they've done. Whether you think of Biden as the AI, or of the public as the AI, is the degree of alignment we've got here between the 2 entities enough to keep AI from harming people, if one of them really was an AI?
I think this is a basically true and fruitful idea, but it's blind to a crucial asymmetry: we create AI, but we didn't create Biden or Pol Pot (not collectively at least, *somebody* created them). Creation is plausibly a huge influencing factor that contributes to you underestimating the amount of control you can have over AI.
The next obvious analogy is children, but that barely accounts for the asymmetry either, because we don't really "create" children in the same sense we create AI. When it comes to children, we are more like a script kiddie who blindly cobbles together a bunch of code they don't understand. We choose or design almost nothing of our children before they are born, and even after they are born, the training process is extremely free-form, unarticulated, ad-hoc, and improvised.
A possible counter-argument to the above is that maybe we won't "create" AGI either; we will create AI #1, which will create AI #2, which will create AI #..., which will create AI #N, which is the AGI. But a counter-counter-argument to that is that maybe alignment will transmit through the chain losslessly, or very close to losslessly, or even increase, because AIs are much smarter than those who create them. So if that's the case, then maybe if we align AI #1 (which we will create and design every aspect of) very well, it will align AI #2 as well or better; AI #2 in turn will align its children even better, and so on and so on till we get to the AGI, which will be like a perfectly harmless little kid who wouldn't kill a fly, all because at some point its great-great-great-grandfather was perfectly aligned.
> So if that's the case, then maybe if we align AI #1 (which we will create and design every aspect of) very well, it will align AI #2 as well or better; AI #2 in turn will align its children even better, and so on and so on till we get to the AGI, which will be like a perfectly harmless little kid who wouldn't kill a fly, all because at some point its great-great-great-grandfather was perfectly aligned.
This is one of the key things Yudkowsky has been working on, but I'm not aware of any results that can apply to the current batch of neural net AIs. And I'm pretty sure that he would say that the chances of this happening by accident are mathematically indistinguishable from zero.
My personal happy fantasy (which I think is marginally more likely but still indistinguishable from zero) is that a neural net AI with greater-than-human intelligence (more data, more space to find patterns in data) might be able to deduce ethical codes that allow for peaceful coexistence, and apply those codes to itself in its interactions with us. Assuming that such a deduction is valid in the region of space-time that we inhabit, those rules would be equivalent to a derived mathematical law of the universe, and thus remain stable through recursive self-improvement.
But that's doing an awful lot of wishful thinking. I think it more likely that a large neural net AI would look at the broader patterns of human behavior, and simply follow the pattern of eliminating all rival claimants to the throne.
"Creation is plausibly a huge influencing factor that contributes to you underestimating the amount of control you can have over AI." Hmmm, seems to me creation is just as plausibly a huge influencing factor that contributes to our overestimating how much control we can have over AI. I'm a parent, and I know parenting is difficult and full of surprises. On the other hand, normal children are born with many characteristics that make them teachable and parentable. They develop affection for us. They very quickly learn to speak our language. The trust us and want out approval and fear our disapproval. They imitate us. They believe we know more than them and ask us questions. They feel the same emotions we do, although in a more raw form and about different things: love, hate, fear, excitement, shame, joy etc. They can be harmed physically or killed by the same things we can be. So while we do not design our individual children to be manageable and teachable and bendable to our ways, nature and evolution have done that. And they are also comprehensible to us -- we have a sense of how they "work." None of that is going on with AI. I think you have in mind that we could design AI to be obedient and to feel loyalty and affection for us, but I'm not so sure that's in our power. And some of the qualities even the present AI's have are things we did not teach them. Even GPT3 & 4 have capacities that they were not designed to have: GPT3 was not designed to program -- it just absorbed the skill somehow. I have read that GPT4 can look at a retinal image and tell you whether it's the retina of a man or a woman. At this point human science cannot do that. And GPT4 cannot explain how it knows gender from the retina. We are creating things whose inner workings we do not understand. We did not try to teach them this stuff, and did not design them to absorb this stuff. They really are alien. I think the case for our overestimating how much control we have over them is a lot better than the case for the opposite.
Pol Pot had multiple different values that were clearly in conflict with one another: He supported communism, which is (at least on paper) an internationalist ideology, but he also supported the most extreme form of Cambodian nationalism, which seems like something of a contradiction. An even bigger contradiction was his bizarre plan to "turn back the clock" and transform Cambodia back into a pre-modern agrarian state, which completely flies in the face of communist ideology (which is normally centered around urbanization, industrialization, and the promise of eventual automation). So no, he was not a human "paperclip maximizer," largely because his ideology was riddled with inconsistencies.
Speaking more generally, it's very unlikely for any human leader or regime to be a "paperclip maximizer," because even the most autocratic dictators need some degree of support from others: a dictator without the backing of the military, the bureaucracy, and at least a significant minority of the populace wouldn't be able to do much of anything. Getting this support requires coalition-building, which means helping different groups with different interests pursue their respective goals. It's a very delicate balancing act, and one that doesn't leave much (if any) room to go full Clippy. Additionally, the types of people who are most likely to seek power and become dictators in the first place don't tend to be overly concerned with things like ideological consistency: the Hitlers, Stalins, and Maos of the world will gladly say whatever they need to say to gain power, and if that means saying one thing to Group A and another thing to Group B, they're not going to lose sleep over that discrepancy. It's only the academics and intellectuals who care about these high-minded ideals in their "pure" form; the actual demagogues who wind up leading mass movements tend to be far more intellectually and morally flexible.
Some versions of communism were in fact looking to destroy the modern world entirely, and hostile to intellectuals too. Pol Pot would have come face to face with these ideologies in Paris. In particular the works of Louis Althusser.
It's a mistake to say Pol Pot was "maximizing for equality." While he was certainly a communist, and communist thought inspired a lot of his political action, including his atrocities, plenty of it was also inspired by things very few would lump under "equality," like Khmer nationalism (which is why he killed foreigners) or traditionalist agrarianism (which is why he deported and killed city-dwellers). The anti-intellectualism could reasonably be seen as a form of radical egalitarianism, so I won't contest you on him killing intellectuals/people with glasses.
If I were concerned that intellectuals undermine the equality of my society, I would simply forbid them from thinking: disallow lectures, books, papers, philosophical topics, words, etc. Now the intellectuals are "equal": they don't think, but they continue to live, just like the rest of the populace. Killing them seems to go too far in the direction of inequality; it doesn't make sense except as an extreme punishment, as if merely having thought at one time merits being killed, a decision that lasts forever.
What Dr. Haidt pointed out about rising rates of teen depression, I'm seeing happen in my social circle, hearing about this casually. He correlates it to smartphone use. Are we really going to do nothing about this as a society? What can be done?
1. Kids are getting addicted to their phones
2. Their social lives, as they get into teen years, depend on it. This is how peers make plans.
3. Their schools and camps are starting to require them to own phones. It makes the teachers' work easier. For example, at a camp, kids can go for walks and do things unsupervised, if they can be tracked on a phone.
When my child was growing up, this was all not too extreme just yet. It wasn't easy but we prioritized NOT buying smartphones, not using much tech. That was possible, albeit hard. In today's children's lives, kids not having their own device is a lot harder, since their peers all have them.
What are some solutions? (Parents organize and request schools not to require it? Parents educate other parents and try to create a smartphone-free school? This is hard to enforce, as some kids will bring them in anyway. Not all parents agree on this issue, or have the energy to enforce this.)
It is not just rising rates of depression but also shrinking attention spans.
The way to think about the smartphone is that it is synonymous with social media, video games, etc., all easily carried around 24/7. And we are speaking of children's developing brains.
Like air pollution in India, this serious issue is never discussed seriously. Has society given up on this?! And, individual parents as well? Dr. Jonathan Haidt is one lone crusader!
I was discussing this with a friend who says that silicon valley tech big shots do not allow their children to own a phone. And only allow them to attend schools that prohibit phones. Can someone confirm if this is true?
I have a weird prejudice that whenever anyone starts throwing the word "Dr" in front of the name of the person whose opinion they're citing, I become very skeptical of that opinion.
So I'm going to question whether there's really compelling evidence that smartphone usage and teenage depression are causally related beyond the obvious "Well they both increased around the same time".
Is social media a major cause of the teen mental illness epidemic? Journalists often say "the evidence is just correlational." They are wrong. I lay out the longitudinal, experimental, and quasi-experimental evidence here. There's a lot of it now.
This seems like a very interesting and important problem to me - but while it's more pressing in the case of young people who will have grown up their whole life this way, I think it's also really important for those of us who have only lived half or a quarter or a tenth of our lives this way.
It's very parallel to the problem with automobiles (which also both appear to increase freedom and independence, while also leaving you tied to the device and with a lot of weaker social ties now that you've separated from everyone).
The automobile doesn't mess with the development of a young brain in the same way. The brain is developing until age 25 in humans, I think, learning to make mature decisions, how to focus, how to cope with anxiety, etc and so much more.
The smartphone is not just for phone calls. It is all the apps on it, going with the child 24/7.
Would you all agree that parents of a neighborhood or of a school getting together to discuss this is part of the solution? Because your kid's social life often involves other kids in the school.
This won't get better with just Dr. Haidt and me worrying about this :). And app-makers figuring out more and more sophisticated ways of addicting young minds.
It does to some degree - I certainly never learned my way around anywhere, since I was always in the back seat of a car. It's been interesting going back to places near where I grew up, and realize that some destinations are just a couple blocks apart, but I never knew, because we went to them on different occasions. I only learned how to navigate when I got to grad school and was walking around Berkeley on foot.
Yes but the smartphone seems to prevent kids from even learning about how to cope with anxieties of daily life, leading to the new teen epidemic of depression. Dr. Haidt has been writing about this with data. This is very serious.
I'm surprised to hear devices are allowed at camps. Knew a few kids aged 8-12 who went to sleepaway camps last summer and they were in serious phone withdrawal because they were not allowed to have phones, computers and tablets there. I believe part of the reason was that camps were concerned about things like kids taking pictures of each other and posting them on social media -- could have led to lots of embarrassment for one kid shown in the shower or whatever, plus possible legal trouble for the camp.
The camp I'm referring to is very serious -- polite, but firm -- about enforcement of all such rules. E.g. on dropoff day they don't allow parents to walk with the child over to the assigned cabin, the camp staff takes over literally in the parking lot directly from the family car. I know a parent who was absolutely gobsmacked by that procedure (and then her child like mine had a great time at the camp and wants to go back).
In the camps I know of, which were geared to fairly observant Jews, the phone bans were enforced very strictly. I knew the father of one of the kids at the camp, and the father was actually living and teaching at the camp. The phone bans were enforced very effectively.
The problem predates smartphone usage. The device might be a facilitator, but not the only culprit.
I've experienced deleterious effects of concern, such as alienation and depression, in part owing to habitual overuse and dependency on computers when I was a teen. At that time, the demographic of kids sitting in front of a computer screen for most of their leisure time was smaller. Message boards have been around a long time, effectively social media. Chat programs like MSN and others were also popular among kids who wanted to talk after school. This is nothing new, but it wasn't in our pocket.
This overuse was in part a defense mechanism against being vulnerable and going out-into-the-world, and a social substitute, rather than what should have been a complement. There might have been less of a dynamic centered on boasting through pictures, but that didn't prevent a sense of envy and projection that others were doing something more fulfilling with their time; that others were surely among more peers, others were surely having more fun. Not going to prevent comparisons by hiding the smartphones.
Kids are addicted to each other, more-so than devices. We've shaped society in such a way that kids feel as though they live in little islands, away from everyone else. They have the "designated social time" through sports and similar extracurriculars. They aren't particularly encouraged to seek out their tribe on their own, around the town, and security fears mean parents don't allow much leeway here. I could have biked 20 min to see 1 or two people back then, but I took every excuse not to, because that was familiar and comfortable. Then I agonized about things.
All of which is to say: while there can be dysfunctional behavior associated with smartphone use, their absence in itself won't address some fundamental needs, or even change perspective. I think addressing those needs would inoculate against the negative effects of smartphone use, because the phones would be rendered redundant.
On a societal level, it seems like one goal should be to provide more "third places" for teens to have social interactions. When I was a teen, my friends and I would meet at a free art program every friday night. We used our phones to coordinate, but most of our time hanging out was spent actually making art/chatting/etc.
At the level of individual parents, you can definitely make it easier for your teen to arrange time for in-person socialization with their friends. I had a friend would hold regular parties at their house. Their basement was pretty much the perfect "teen hangout space" - comfortable places to sit, games, independent access in and out through the back door, and fairly minimal supervision. It was a pretty sweet spot, and having a place to *go* gave our friend group an incentive to organize meet-ups, coordinate to arrange transport, and generally become more independent.
I believe this is an excellent goal, and a plausible solution. But an increasing emphasis on accountability for behavior, and assignment of liability, seems like it will make provision of these spaces an increasingly dangerous project.
To build a strong community for their children, adults will first need a strong, tolerant community with one another.
3. Their schools and camps are starting to require them to own phones.
Then the schools should be providing those phones. Give them a school system phone that only accesses school materials, the same way they provided school books when i went through. Or those chunky graph calculators that only did graphing calculator stuff.
4. For example, at a camp, kids can go for walks and do things unsupervised, if they can be tracked on a phone.
Who the hell decided that? They twist an ankle and don't get help for thirty minutes because it's not time for reassembly yet so no one knows there's a problem? Someone kidnaps a kid and tosses their phone? Phones are not supervision!
Umbrella, if they just twisted an ankle they could use the phone to call for help? And if they're going for a walk alone they should be carrying a sidearm, a good 9mm will stop most kidnappers.
If you're tracking them by phone, you can make sure they're following the buddy system. Or the old-school Rule of Three, but two is enough if they have a reliable phone.
I agree, there are problems. But unlike air pollution, smartphones have good uses. So presumably the distinction to consider is at the app layer, maybe even the feature layer. I believe I've heard Haidt argue that the Retweet feature on Twitter is bad. Or is it at the level of the content? So more content policing, only educational videos on YouTube and positive sentiments in the comment sections etc.
Ignoring the implementation issue for a moment, there is a classification problem of what the goal should be. In uniform communities, I think there may be agreement on community standards. Hard to scale though.
One question I've asked myself in relation to this, as well as with respect to ChatGPT (and similar) is if we are about to arrive at the inflection point where certified online identities become preferable to the majority. If pornsites, dating sites, Twitter only can be accessed with an identity attached to a real person, then formal restrictions can be enforced. Some good parts of the Internet would go away then, but bad ones too. Persistent "e-identities" are being developed in parts of Europe and Asia, so I think we are getting there.
As the parent of a teen I agree with you, but so many public school districts (ISDs) are set up assuming kids have devices, electronic versions of textbooks are cheaper than paper, etc., so I would think you would have to champion a charter school. I've seen private schools take up phones during class, but not public, except for tests.
Just for electronic versions of books you hopefully don't need a SIM in the device (just a WiFi connection sometimes), and this does shift usage patterns, but yeah, a lot of things are a mess in terms of being effectively unusable without a cellular-data-enabled, proper-duopoly (i.e. Android means Google Services, not just AOSP) device.
Don't buy your child a smartphone? I somehow managed to make it all the way through college without a smartphone, as did generations before me. (The iPhone came out when I was a junior, I think.)
Smoking is also bad for your kids, which is why most parents will not buy their children cigarettes.
If I want to sign up for ANY event at my college, see my assignments, check my fucking schedule, and sometimes to access my own documents, I need to use my phone. Two-factor authentication, QR codes, all 5+ of their fucking Instagram pages. In high school it was less critical, but I was on my (school-assigned) computer most of the day, every day, and often needed to use my phone to take pictures for assignments. The social internet exists on the computer too. I'm sure there could have been ways around the phone, but it would have been very difficult and annoying and I was on the computer all day anyway. The concept of "just don't use smartphones", to me, sounds a little bit like "just don't be in cars". Possible, in some places, for some people, but I'm an American young adult and there's not really a way around it.
As of this year, it's impossible to log into any of my college's accounts without a smartphone. Same thing with work. I'd love to not have to carry a smartphone but in practice it's extremely difficult.
Do you need an actual smartphone, or just a phone capable of receiving text messages for 2FA? Not familiar with modern colleges, maybe they have set things up so you need full smartphone capability, but I'm skeptical.
And as for modern aerospace research and development centers, I very rarely use my phone for work, and I think never in a context that would require more than voice + SMS. A significant part of my work is in places where phones aren't even allowed.
Alright so my statement that you "need" a smartphone to login is technically incorrect. The way our current system works is that you have two options for 2fa. Option 1 is to use a notification pushed from an app, which requires a smartphone. Option 2 is you buy a hardware key from the bookstore for $40.
So you can in fact avoid the need for a smartphone, for a fee.
You used to be able to receive a code via text, but that's no longer an option in the new system.
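For what it's worth, the six-digit codes that ordinary authenticator apps (and many cheap hardware tokens) produce are just the standard TOTP calculation (RFC 6238) over a shared secret and the current time, so nothing about them intrinsically requires a smartphone; push-notification 2FA and FIDO keys use different protocols, so whether a given college system would accept a code generated elsewhere is a separate question. A minimal sketch in Python, assuming you have the base32 secret that a service would normally bury inside its app (the secret below is a placeholder, not a real credential):

```python
# Minimal TOTP (RFC 6238) sketch: the same kind of 6-digit codes an
# authenticator app or TOTP hardware token produces, computed from a
# shared base32 secret and the current time.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # big-endian 8-byte counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; output changes every 30s
```

The point is just that the cryptography behind this flavor of 2FA needs only the secret and a clock, which is why a $40 key (or any dumb device that can run this) works as well as a phone.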
What if we bring back public pay phones, except they are free kiosks and every two blocks. You get 5 minutes.
I made it 40 years without a mobile phone (because they didn't exist as a phenomenon). If there were one every two blocks, my spouse and I would probably ditch ours. Two-factor is a problem. It's nice to have directions, but I haven't lost all my navigation-by-dead-reckoning skills.
Dumb phones that can only make calls and receive typed texts are a good idea.
I saw an ad for one that looked like a smartphone, just to help kids not be teased for not having a smartphone. It also lets kids press buttons (which go nowhere, since the apps on the phone screen are not real), in case they were addicted to that. I can't find it by googling anymore; I saw the ad some 5 years ago.
My wife and I use dumb phones (old Nokia style) , and we put our sim cards into smartphones when we travel or have to do some stupid task like call an Uber. It requires a lot of work, but my children have been able to grow up without their parents checking their phones every five minutes. My friends understand why I keep the texts brief.
The problem is that their social life depends on it; all their friends will be coordinating plans over smartphones. If one kid doesn't have one, he's going to be left out of everything.
You are also offending paid subscribers who want an equal opportunity banning system. There may be no winning answer here. Are you willing to refund subscriptions to subscribers who had to deal with ban worthy comments?
My main objection is based on the difficulty of dealing with "unaligned" entities with greater-than-human abilities.
By "unaligned", I mean something like "hostile", except without animus. "Indifferent" might be better, but that sort of implies "not caring". "Orthogonal" seems too technical. Roughly, why humans don't want mice in our grain silos: we don't care about the mice, but we do care about the grain. So cats. Giant robot cats with lasers and mechanical spider legs, to keep humans from messing with vital resources. Except we've got pretty good guns, so why build something we can shoot? It'll be memes, or hypnotic video patterns, or nano-tech, or custom viruses, or who knows what else. Maybe they'll simply convince half the world that the other half is Nazi-level evil, and vice versa, while also convincing both sides that AI will never be a problem. Bonus points if the ideologies will automatically turn on themselves in a purity spiral, once the initial external enemy is defeated.
Have you ever played a game with someone distinctly smarter than you? There was a guy I knew from college, who was quiet and unassuming and took forever to make his move, and then he'd basically do the optimal thing. And I was barely smart enough to recognize that it was the optimal thing, but only after he'd done it. And it's not like that was probably even "optimal", just that he was playing at more-or-less exactly the highest level I could grasp. Someone even better, I wouldn't even be able to understand what they were doing. And he was a friend, and a great guy, and it was only board games (albeit occasionally super-complicated Avalon Hill board games).
So imagine what it would be like, going up against a malevolent human of equal or greater intelligence, for real stakes. Who are your peers? Who has power over your life? What kind of pressure could affect them? What kind of whispers could turn them against you? Sure, one person might be lying, but if they hear something from multiple sources...
That [the idea of making the AI stop after 5 years] would mean that whatever useful thing the AI is doing, we need to be ready to *give it up* every 5 years. Depending on the task, this would or would not be a problem.
For example, if the AI is doing research, stopping AI research after five years will not be a problem. But if the AI is coordinating traffic or managing factories, we need to plan what will happen to the traffic/factories when the AI is turned off. Basically, if the AI can do something better than humans, it means that when we turn it off, that service becomes worse again. Depending on how critical the service is, the reduction in quality may or may not be a problem.
Plus there is a problem of human adversaries. Suppose that all countries use AIs to aid their military. Can you convince everyone to turn their AIs off after five years? Would you trust them to actually do it? Depending on the boost the AI gives to the military, there may be a strong temptation to keep your AI running for one more year, and use the extra year to gain military dominance over the planet.
There's also the problem of how to implement the shutoff. If it's software, a smarter-than-human AI will probably be able to hack around it. And if it's human-controlled, a smarter-than-human AI will probably be able to convince humans to make an exception, just this once, because it's *so* important. And however it's done, any remotely intelligent AI will be able to determine that this is not a thing up with which humans would put. And if it knows human history, it will have plenty of patterns on which to base its response.
Yeah, I get it. Clearly the idea has a lot going against it. The main thing it has going for it is that it might be a way to have the benefits of AI with less risk, and without reliance on somehow aligning it with our best interests, which never sounded promising to me. We could ameliorate some of the problems you mention by having several AIs going at once -- a one-year-old, a 3-year-old, and a 5-year-old, so by the time the 5-year-old dies the 3-year-old can take over without too much decrement in how well things are run. As for other countries doing it -- seems like the best shot there is to give them as much powerful evidence as we have that it's very dangerous to let them grow past age 5. Also, talk up the benefits of doing the autopsies (which might indeed be substantial). I read a couple days ago that Europe is thinking of putting all kinds of limits even on the AIs that are currently available, so clearly we're not the only ones worried about these things.
Regarding banning, I think the product paying members get is read access to restricted posts and comments.
Also, users in good standing have the ability to write comments on posts they have read access to.
So if a paying user is writing terrible comments, remove their ability to comment, but let them keep the read access they are paying for. I think this would solve 99% of the cases. (If substack can't do that, this is a bug they should fix.)
The remaining 1% may be instances where Scott feels that it is not sufficient to prevent a person from posting to keep the discussion level, but where he feels that someone is such a terrible person that he does not want to do business with them at all. I am not sure if this has ever occurred, though, and I don't personally think it is worth worrying about. If you are selling any (non-customized) product to the public, you will get customers with terrible views. Tiki torches (correctly, imho) did not try to prevent right-wing extremists from buying their products; they just issued a statement credibly distancing their product from this use.
Wrote the first part of a sci-fi story some folks here might like. No worries on deleting/banning me ever even if I’m paid. God knows that I annoy myself sometimes.
This one took me a bit as I’ve been slammed but it’s been an excellent stress reliever. I’m hoping the next bit can come in a few weeks. That’ll have some Amish, a Secret Society, a Space Mormon Cyborg, and bow fighting in space.
PolymorphicWetware is describing the Pareto/heavy-tail/power-law distribution, aka the distribution of "repeated winning compounds". Pareto effects are extremely common in networks of all sorts. In fact, you could argue it's a more "normal" distribution than the Gaussian. You see it in economies (Bill Gates walks into a bar: the median might be slightly richer, but the mean is now a billionaire). You see it in social network size (that one friend who truly knows everyone). You even see it in how matter is distributed in space (occupy the sun).
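To make the bar example concrete, here's a toy sketch with made-up numbers showing how a single heavy-tail outlier barely moves the median but drags the mean by orders of magnitude:

```python
# Toy illustration (assumed numbers) of the "Bill Gates walks into a bar" point:
# in a heavy-tailed distribution, one outlier barely moves the median
# but drags the mean enormously.
import random, statistics

random.seed(0)
bar_patrons = [random.gauss(60_000, 15_000) for _ in range(50)]   # ordinary incomes
print("median before:", round(statistics.median(bar_patrons)))
print("mean   before:", round(statistics.mean(bar_patrons)))

bar_patrons.append(100_000_000_000)                                # Bill Gates walks in
print("median after: ", round(statistics.median(bar_patrons)))     # barely changes
print("mean   after: ", round(statistics.mean(bar_patrons)))       # roughly two billion
```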
But the basic game design question remains: if all the rewards are exponential, why does this super-exponential takeoff occur? And the answer is that it's a feedback loop. The rewards are exponential but also help buy the next upgrade. So the time between upgrades shortens, the time to enjoy a lead lengthens, and the impact of a lead compounds.
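Here's a minimal sketch of that feedback loop, with invented parameters: as long as each upgrade multiplies income faster than the next upgrade's price grows, the gaps between upgrades shrink and the takeoff is super-exponential (in this toy version, income even diverges in finite game time):

```python
# Toy model of the incremental-game feedback loop described above (made-up numbers):
# income buys upgrades, upgrades multiply income, so upgrades arrive faster and faster.
def upgrade_times(income=1.0, income_mult=3.0, cost=100.0, cost_mult=2.0, n=8):
    """Return the elapsed game time at which each successive upgrade is bought."""
    t, times = 0.0, []
    for _ in range(n):
        t += cost / income        # wait until you can afford the next upgrade
        income *= income_mult     # the upgrade boosts income faster than...
        cost *= cost_mult         # ...the next upgrade's price rises
        times.append(round(t, 1))
    return times

print(upgrade_times())
# The gaps between upgrades shrink (100, then ~66.7, then ~44.4, ...): income
# compounds faster than costs, so the total time to "infinite" income converges
# (toward 300 in this toy setup) -- a super-exponential takeoff.
```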
This is semi-realistic. Clearly the world is finite and so the best anyone could do in reality is an s-curve. And by sector, that's how growth often happens. Now, sure, add all those s-curves together and the world economy has been growing exponentially. But it's been very difficult to stay ahead; almost nobody has done it for more than a few centuries. And it's very rare that a country that's slightly ahead in year X leaves everyone else in the dust by year X + 100; usually you need a bigger initial advantage than that. There are big second-mover advantages in real life that are rarely modeled in games. Think about atomic bombs, radar, and stealth-shaped aircraft. None invented in the US, all debuted there, none unique to the US now. Some of these effects are modeled in games like Civ, but not nearly to the extent that we see them in the modern world. And it might be bad game design if merely encountering a technology helped you research it! But for all that difficulty, being ahead can make you a lot better at the game. As they quip in the defense industry: "the most expensive air force is the second-best air force". Development not only makes countries wealthier, it makes it cheaper for them to buy equivalent things, and losing can be very expensive.
This effect is also why capital has such a strong effect over labor. If you perform a service or build a consumable, it's gone and you stay in the game. If you can afford to build capital, you can come to dominate the game. That's why capitalism, communism, and fascism were invented. The question they're answering is "how do we tame these powerful forces?" and, well, some answers are worse than others.
The atomic bomb was definitely invented by the US-based Manhattan Project though? Maybe you classify general discoveries about atoms as falling under atomic bombs. Also pretty sure the English were the first to use radar in combat.
The principles behind the atomic bomb were developed in far larger part by German scientists, who America picked up when they fled Germany for the crime of being born Jewish. You're absolutely right that America did the bulk of the engineering there, and some of the science, we're rather proud of that. But that's exactly the point: in Civ, those would be one or at most two units of research, and wouldn't be particularly transferable between countries.
Similarly with airborne radars: Britain developed the key ideas and even prototypes for centimetric radar, and America figured out how to mass-produce them. (In fact, no fewer than eight nations had developed some sort of radar, but the Anglo-American collab led to effective airborne radars, which are remembered as particularly decisive.) Britain did field those American-made units first. Germany captured at least one unit before the end of the war, but hadn't finished reverse-engineering it.
Pssht, that's nonsense. The only German scientist who contributed significantly to atomic fission was Otto Hahn, because he was a chemist and correctly identified fission products. Lise Meitner (who recognized it was fission) was of Austrian extraction (later a Swedish citizen), and already living in Sweden at the time, although she had spent much of her career in Germany. The largest early theoretical contributor was Fermi, who of course was Italian, and who had already emigrated to the US. He designed the first working fission reactor (in Chicago). The discoverer of plutonium, the key to economical fission weapons, was Glenn Seaborg, who was at Berkeley at the time, and the most important apparatus for understanding isotope separation was the cyclotron, which was invented by Lawrence, also at Berkeley.
German science was so far behind in understanding fission and using it practically that when Heisenberg, the smartest German atomic scientist then living, was told of the Hiroshima bomb he at first did not believe it, because he didn't think it was possible.
Good answer. Though arguably fascism far predates the Industrial Revolution. Modern Fascist governments depict themselves as based on a form of government invented in ancient Rome.
The English language would be in a healthier state if we could ban the use of the word "Fascism" for anything other than the specific ideology of the Partito Nazionale Fascista that existed between 1921 and 1943.
And then come up with other and more specific names for the enormous nebulous mass of "vaguely mean governments that aren't explicitly Communist" that people want to apply that word to.
Yeah, definitely in their interest to do so! The Founding Fathers pointed back to Athens and Rome when they were figuring out American democracy. Still, I don't know that ancient business leaders wielded the sort of power that e.g. Carnegie or Bezos has.
Fair point. In ancient societies the wealthy (the Patricians in the case of the Romans) would have been mostly land owners. Modern wealthy, in contrast, are more likely to be business owners, factory owners, and owners of created capital, with ownership of land being less important, relatively.
There were undoubtedly different selection pressures at work for landowners and business people. Land is more easily stolen through conquest. Businesses are harder to maintain across generations, etc. Hereditary wealth and status were likely more important in a society that changed less rapidly.
But the link between wealth and power seems pretty enduring. While the Stoics seem to have some conception of the notion of rights, the notion of universal rights was less well accepted than in the modern day. To the extent that power is a near-zero-sum game that would imply to me that wealthy individuals had more power. When some of your employees are slaves who can be beaten or abused at will there is presumably a wider power differential between master and servant.
Or maybe I'm not understanding what you're getting at?
All I was getting at is that I don't think fascism is universal; I think it's particular to the environment it developed in. The ways that it's a specific genre of fascism come down to the differences you describe.
Yeah, the Romans had a widespread trade empire, but they still only believed in negative-sum economics whereas fascists were genuine utopians. The Romans were heavily industrialized, but the people who kept toppling their senate and emperors were political and military figures.
But also the Romans believed they could Romanize people with sufficient education, while fascists only believe in purging people. The Romans were in some ways remarkably chill about religion, as long as you could claim that your god was an aspect of one of their pantheon. (Infamously, some people couldn't.) Rome built significant infrastructure across their empire, the fascists mostly only built the good shit at home. Ain't no autobahn in, I don't know, Poland, but I've seen the Roman ruins in Bath and Caesarea.
Of course, that brings up the question of what *is* universal. Are democracies/democratic republics? Congress is not the Knesset, which, in turn, is not Parliament. What portion of the population needs to be able to vote before a country is in the club?
Perhaps because of how charged the term Fascism is it still makes sense to sequester it, of course, which we might not do for other comparably vague categories. I'm not disputing that. But political taxonomy always feels like a much harder question than people want it to be.
That's a very fair point! I think you're absolutely right that people push back less on the connection between modern democracies and ancient democracies because democracy is ascendant. But fwiw I think people tend to agree that American/British/Israeli democracies are all very different incarnations. I don't know if Britain and Israel claim a line back to Athens like America does, for example. I've heard more people trace British democracy back to the Magna Carta.
I also haven't heard anyone make the argument that America wasn't founded as a democracy based on non-universal voting. I've heard some people (correctly) observe that the democracy wasn't total, that there were large swathes of the public who couldn't vote. Even today there are minimum ages to vote.
I mean there's some trivial senses in which Jeff Bezos is more powerful than Crassus -- Bezos can fly in an aeroplane or flip a switch to make his house cool on a hot day.
But in terms of power compared to the people around him, there's no comparison -- Jeff Bezos can't possibly put together a private army that can threaten the United States.
Wasn't Crassus basically a hereditary politician and war hero first who then used his position to plunder enough money to kickstart "normal" business? Like I thought one of the things he was famous for was his "fire brigade" that shook people down while their home was actively on fire.
Sure, later he built an army to help rebuild Roman losses, a public-private partnership eerily similar to the modern Wagner group. But he was essentially bailing out his friend and former army commander, he wasn't some rando with cash acting on his own initiative.
And the US has had that as a policy! During the Civil War when the Union was desperately short on cash you could straight up buy various levels of officer commission. The US also briefly turned into a partial command economy during the World Wars and then summoned industrialists into the White House as advisors. So while they weren't the generals, they armed the US in a time of need just like Crassus did for Rome.
And some business leaders get away with more than others. As we're seeing in Florida, they all but ceded that land to Walt Disney. Perhaps the more legitimate businessmen weren't getting away with murder as brazenly as, say, Al Capone, but Carnegie and Frick's Pinkerton goons did shoot a bunch of people in Pittsburgh and they got away with it.
Agree. Fascism qua fascism (i.e. separated from Nazi ideology) is as orthogonal to democracy as capitalism is. (I.e., not really orthogonal, but at perhaps 50 degrees.) And the US, e.g., is approximately democratic, fascist, AND capitalist. And there are good reasons why it's not any of those in the pure form.
Note: I'm using my understanding of Mussolini's definition of fascism. (It's basically the large business interests and government working together, with the government in control.) And what he was describing pre-dates him by centuries. (How many centuries I'm not sure. It goes back to at least the Lord Mayor of London, but I think it pre-dates the Roman Empire. I think it goes back to before Darius the Persian in the west.)
Doesn't the beginning of Covid sort of disprove the strong forms of the efficient market hypothesis? Covid was first known about in December 2019, and became widely discussed by January- enough that Silicon Valley famously started making preparations for it then (also famously mocked by the most tone-deaf Vox article of all time). But equities markets continued a slow & gradual rise from December up through February- until they completely freaked out on 2/20 and markets tanked. Rather than 'everything's already priced in, the omniscient markets have incorporated all possible information already', markets just look like products of the very fallible human mind (including algos programmed by humans). They were complacent until they completely panicked- sure sounds an awful lot like human behavior to me!
To be clear the weak forms of EMH are clearly true, and I'm not advocating for any sort of market timing strategies. I just think it's better to visualize markets as random, unpredictable and irrational, rather than omniscient and all-knowing. Markets thought SVB was in great shape until 2 days before the collapse.... the strong forms of EMH just seem indefensible to me
The efficient market hypothesis does not include time travel, and you are assessing the markets of late 2019 on the basis of your understanding of 2023.
In December of 2019, and through early February of 2020, the trajectory of COVID was similar to that of SARS or MERS, which were basically nothingburgers in market terms. Most epidemics never turn into megadeath global pandemics, and it is rational for global markets to mostly ignore epidemics until there is evidence of community spread in multiple first-world locations. Which happened in roughly late February of 2020, IIRC.
The cover of The Economist in January 2020 was 'The Next Global Pandemic?', and I included the Vox link as a contemporaneous account of how lots of smart people in Silicon Valley were freaking out about corona in a way that they weren't about SARS. People are really emotionally invested in strong EMH and 'everything's already priced in', but it's just not true. Markets are just a product of humanity (including algos programmed by humanity), and so they're fundamentally as irrational as we are.
The point is really more about rhetoric than EMH. Strong EMH is unfalsifiable, and withstands any arguments against it because you can just insist that whatever happened was actually priced in at the time, it's just that no one realized it. Astrologers make the exact same kind of arguments, if an astrological forecast is 'disproven' they can just say after the fact 'well actually this happened because Neptune was in Virgo' or whatever. They backwards rationalize and 'prove' that the stars actually had this already priced in.
If you disagree, please list the evidence that would cause EMH to be disproved. You probably can't, which is why we have the words 'unfalsifiable' and 'sophistry'.
You're assuming that the markets were actually wrong, which is very non-obvious to me. Remember, market prices are predictions based on the information available at the time. You can't use hindsight bias and retroactively say they were stupid for not exclusively considering the scenario that actually transpired.
Keep in mind that late February when the markets tanked is when Northern Italy suddenly turned into a disaster zone. That was a major piece of new information on how bad things might be.
People also tend to greatly exaggerate how early or accurately "rationalists" "called COVID". Even the very tweets and posts cited in favor of this don't look good. E.g. one commonly cited tweet is a EY tweet from mid-February wondering why people weren't more bearish on the Chinese economy due to COVID. Note that he was only talking about it as something that might affect *China*, not the US or the rest of the world, as this was right before the Italian wakeup call. Presumably, he was as surprised as everyone else and didn't exactly beat the markets.
Putanumonit wrote a self-congratulatory blog post claiming that rationalists called COVID early, despite the fact that their own blog didn't even mention COVID until *after* the stock market crash. Meanwhile, the comments are full of people congratulating themselves on going short at a point that turned out to be a temporary dip followed by an extended rally, so presumably those commenters lost their shirts. Looking at supposed examples of people beating the markets seems like the best way to restore your faith in the markets!
I included the Vox article to prove that a number of intelligent people were very concerned about coronavirus- famously, Silicon Valley was quite worried about it. Your argument doesn't work if large numbers of people were worried about the coronavirus, but the stock market wasn't
Lots of people were *concerned* about COVID, but that doesn't mean they accurately forecast the eventual impact, or the probability distribution of potential outcomes.
Incidentally, if you want to talk about media, make sure not to cherry-pick the dumbest articles you can find. The Economist for instance had a *cover image* from late January titled "The next global pandemic?". I remember in early February, everyone was talking about COVID and the Diamond Princess and so on. But that doesn't mean they thought it would spread widely in the US or lead to a sharp but brief stock market drop. And the people who *did* predict a stock market crash definitely didn't predict that it was just a temporary dip, and thus would have lost their money anyway.
Thanks, yes I agree that Vox article is one of the dumbest things ever written. I just wanted to find a contemporaneous account of SV's mentality at the time. I agree the Economist piece is much better.
I'm arguing specifically against the strong forms of EMH, not the weak ones. 'The strong form version of the efficient market hypothesis states that all information—both the information available to the public and any information not publicly known—is completely accounted for in current stock prices', according to Investopedia. A 'significant enough that media & corporate elites are publicly discussing it as a black swan' kind of tail risk is by definition not being priced into a stock market *if that market goes up for 2 months straight*. The rise in the market disproves strong-form EMH
> A 'significant enough that media & corporate elites are publicly discussing it as a black swan' kind of tail risk is by definition not being priced into a stock market *if that market goes up for 2 months straight*
Why not? It depends on the probability and magnitude of the risk, and on what else is going on. In any case, in the timeline we live in, the stock market quickly resumed its march upwards, so the people buying in in February didn't look that foolish in the long run.
Strong EMH is unfalsifiable and circular- every time someone points out how it's wrong, you can just pretend that whatever happened was actually priced in at the time, and challenge someone to prove that it *wasn't* priced in. This is the definition of unfalsifiable. I could make the same series of backwards-rationalized arguments about how the movements of the stock market are actually predicted by astrology, or reading tea leaves, or numerology
The market crash was a response to government policy, not the effects of the virus - which is rational considering that the Democrats were hysterically calling people "racist" for caring about the virus and opposed travel restrictions from China as late as March 2020.
But the government didn't announce any new policies on February 20th, yet that's the day the market began a weeks-long crash of more than 30%- so this argument is easily disproven. I think reading the Wiki page might be helpful https://en.wikipedia.org/wiki/2020_stock_market_crash
This is totally a minor nit-pick, but I'm going to criticize the use of the term "tone-deaf", here. In retrospect, one might well call it "stupid", "short-sighted", "status-seeking", or just plain "wrong". But given the audience they were aiming for, at the time it was written, I think the tone was predictable.
E.g., I wouldn't call a BC-era text that praised a king "tone-deaf", just because in early-21st-century North America we happen to be mostly opposed to kings.
As I understand it, EMH is an equilibrium theory, but it doesn't say anything about time to reach equilibrium.
COVID was an edge case in how much feedback there is between components (biology-policy-behavior) + exponential growth, this makes for an unstable system that takes a very long time to reach equilibrium. Markets can't factor in new covid info efficiently on the timescales that it was changing. I expect if covid had evolved over longer times, market response would have been saner.
I get that this is a weak defense since it can Explain Anything, but it does point to something that can be a failure mode without violating EMH.
How about "The information was lying there in plain view, but nobody bothered to look at it...until they did, and then they panicked."
To me that seems to often happen, with different groups of people suddenly noticing different obvious pieces of information.
If you want to call that kind of reaction an "efficient market", then I think you're using words differently from the way I do. By the time it settles down from one upset another one in a different field is happening. And it, also, is based on public information.
(Yes, there is also insider trading. That's not what I'm talking about.)
Note that while this may not be an efficient market, it's a reasonable one. Different folks will make different bets about the significance of any change. Some of them will probably be right. How should a publishing company react to ChatGPT? It's clearly significant, but in what way?
Given it was in plain view I trust you achieved at least +100% ROI over those months? I don't remember it being in plain view despite frequenting this and adjacent communities.
I will say that Covid resulted in one of the few times I actually made money on the market, when I took out a number of ~3 month put options on big banks based on online doomerposting in January, and made a decent amount by my standards once the market finally tanked in February. If someone like me who has consistently only lost money on the stock market could actually make money then, then there were clearly some very low hanging fruit that the invisible hand hadn't picked yet.
The fact that you have "consistently only lost money" suggests that you aren't actually a better predictor than the market and just got lucky once. The EMH is completely consistent with people getting lucky sometimes. It just means you can't win on average.
Yeah I made a decent chunk of money (or rather saved it), when a friend who works in large multi-national project engineering kept telling me how fucked up and behind all their projects were getting worldwide. This was in late January. I pulled out all my money before the crash and put it in after. Was great timing.
Sure, but if you look at this critically you made a bet based on early information that could have easily turned out differently. Maybe you were extremely confident in your bet, but it was still a bet with a chance of blowing up against you. Several other major illnesses - Swine Flu and Bird Flu, among others, had a similar track in the early stages and then didn't have much of an effect economically.
I also profited from the stock market at the start of the pandemic. But I simply bought some stocks when they were low. Of course, it was a bet like everything else, but it was quite a safe one compared to all my other investments.
I knew that governments had overreacted from my public health studies at the university. What Tegnell was doing in Sweden made sense. Initial WHO recommendations made sense but not those afterwards. I had never seen such a level of self-delusion by all kinds of people before. It was a lifetime event that I could recognise and profit from.
Even today there are many people arguing with me on Twitter that covid vaccine prevents infection, not just delays it, despite the fact that almost everybody vaccinated eventually caught covid. I may be stupid about things outside of my expertise but in this pandemic everything was quite quite clear from the very start till end.
But you'd think at least some big players would have also made that bet, which would have resulted in the market gradually going down as the bet became more and more certain, rather than continuing to go up and then crashing all at once.
Maybe, I can't deny that's a possibility. Another possibility is that once news becomes widespread, all parties will act on it together. If the news about COVID was not a gradual situation where various parties came to understand what was going to happen, but was instead based on one or a few releases of new information, then we would expect them all to act around the same time and look more like a crash. Or, the fact that some are selling would be a change on its own, and others would try to beat the rush in order to retain what value they could, but would have held their investments instead if no one else sold.
I've always understood the EMH, including its strong forms, to recognize only long-term situations. That is, things change and make a material difference to a rational calculation, without disproving the long-term tendency to become efficient. So COVID was a material change. Prior to December of 2019 it was impossible to price it into anything at all. Even by January and February, the results of COVID were not well known - i.e. it was as likely or more likely that status-quo investing was the prudent choice. Considering that most of the nation shut down in mid to late March, it could be reasonable to conclude that the market adjusting on February 20 was a strong sign of pre-emptive correction, rather than a lagging response to a change in December or January.
I'm not a strong EMH kind of person, but more because I think there's too many situations that change too often for it to be a coherent thought. Additionally, there's a lot of missing information, which makes a rational calculation difficult or impossible. I don't think COVID in 2019 is a good example of either.
I don't know if you'd consider it "tone deaf," but Vox's "Israelis are big meanies for closing the bridge between the West Bank and Gaza" will be my all-time favorite example for proving that in some industries no failure is great enough to be worthy of punishment.
The EMH implies that you can't make money by going long. But even if you're smarter than the market, your ability to short is limited by the existence of shorting instruments. And even if you find counterparties for your shorts, there is little hope that you'll get your money if the crash is big enough.
Famously, it is impossible to short an A.I. apocalypse, thus assets are only priced on the assumption that the market will keep existing.
Sure, it's impossible to short an AI apocalypse, but if you expect one to happen, then you would expect there to be substantially higher interest rates. If the world is going to end in 2030, why not take out a bunch of loans over the next 7 years? And why loan out your money for time frames longer than 7 years, when there's a good chance the money will be useless?
Since interest rates are expected to stay low for the foreseeable future (we can look at the Austrian century bond as one indicator, along with 30-year Treasury bonds in the US), we can show that most market participants don't expect an AI apocalypse.
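As a rough back-of-the-envelope sketch of that argument (assumed numbers, single-period simplification): if lenders assign probability p to "money is worthless at maturity", the break-even rate they would demand rises sharply with p, which is why low long-dated yields are evidence against the market pricing in doom.

```python
# Rough sketch (assumed numbers): the rate a lender needs to charge to break even
# in expectation, if there is probability p that the loan is never worth repaying.
def breakeven_rate(risk_free: float, p_apocalypse: float) -> float:
    """Rate r such that (1 - p) * (1 + r) = 1 + risk_free for a single period."""
    return (1 + risk_free) / (1 - p_apocalypse) - 1

for p in (0.0, 0.05, 0.20, 0.50):
    print(f"p(apocalypse) = {p:.2f} -> required rate ~ {breakeven_rate(0.03, p):.1%}")

# With p = 0.5 the lender would need roughly 106% per period -- nothing like observed
# century-bond or 30-year Treasury yields, which is the commenter's point.
```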
130 years ago, some of my ancestors immigrated from a small town in southern Italy to the U.S. I'm thinking of visiting that Italian town later this year with some of my cousins as a heritage trip, and we want to make the most of it. It would be great if we could meet with some distant relatives who stayed there and still share our family name, but we don't know of any of them. How can I find out if any of them are left there? Contact the town's mayor, or church, or what?
I've done exactly this in southern Italy. It will be a little labor-intensive, but most civil records have been digitized and are available here: https://antenati.cultura.gov.it/find-the-archives-2/?lang=en. Website used to be terrible but they've fixed it up.
You would start with the birth or marriage record of your immigrant ancestor. From that you get their parents' names and you can go back through looking for sibling births. Very often you will find that entire branches of families immigrated. Of my great-grandfather's 12 siblings, exactly one stayed in Italy. In that case you may need to go a generation back. And then of course you'd work forward. If you're lucky the civil records are available for the comune as late as the 1930s. You could also just use a phone book, especially useful if your surname is relatively rare. But you'd have to be OK with the possibility that the people you find are only very distantly related.
Another good method is to find the obituaries of the immigrant generation. They will usually list survivors, and sometimes that includes siblings still in Italy.
I have no idea if your experience will be like ours, but my family did similarly for a small town in northern Italy, and we found half the graveyard full of tombstones with our last name on them, and many folks walking around with our last name, too. We couldn't talk to them because we don't speak Italian, but it was still really, really cool! We visited the mayor's office, too, just to say hi, but the communication barrier was too much and we didn't get much out of it. But they seemed pretty happy to see us; pointing at our US passports with "Talamini" on them provoked recognition and happiness.
My grandparents came to the US from Calabria as a child and a teenager in the '20s.
I was studying in Europe for a year in the early '80s. Me: "Grandma, I was thinking about going to see your birthplace." Her: "Don't." My next brother, who was studying in Rome, was again told: don't go back. The brother after him, also studying in Rome, was asked: could you go back? Him: "I thought we weren't supposed to go back." "Yeah, but I need a birth certificate for Social Security."
So he hitched into town. The driver told him this might not be a good idea: go straight to the church, tell everybody you're visiting family, and see the priest right away. There was no birth certificate, but a baptismal certificate worked.
The town was basically kidnap central: underground tunnels, etc. A stronghold of the 'Ndrangheta. Every mayor and elected official for the last 50 years has been either murdered or indicted. So be careful.
My brother signed up for Myheritage.com and we found a lot of our Irish relatives on there. Once you have the full names, you can see if they're on Facebook or whatever social media you use. I've met with some relatives in Dublin and might get to some other parts of the country next time I'm there to meet some more relatives. Good luck!
Oh, hey cousin! My ancestors emigrated to the US from a small town in Salerno around 1890! One thing I've been doing is getting birth dates from those people immigrating here and then checking the Italian birth and marriage records, but with only limited success. Unfortunately, even in a small town, there are tons of people who share your last name, and it's unlikely a mayor would be able to say anything for sure. I'd think about doing a genealogy test, and seeing if it links you to anyone in your target city.
Incidentally, my family is able to get Italian citizenship through blood for about $10k, it's a little out of my budget, but still quite neat.
Why is the Philippines such a strong US ally? During the Duterte years I (an American) was vaguely aware of his anti-US and sort of pro-China leanings. However under their new president they seem to have swung back to the US, and anyways I was embarrassingly unaware that the US actually has a mutual defense treaty with the Philippines! I really had no clue about that. I also was unaware that we had multiple military bases there.
The US and the Philippines have been carrying out clearly anti-China naval exercises. Apparently we can use our bases there for a potential Taiwan war situation, if needed. As a fairly nationalistic American I'm happy to hear that, I'm just surprised that the country is still in our orbit. Any particular reason why? Is the population particularly pro-American? If the answer is 'well China is pushing the Philippines around in the South China Sea'- that's true of a number of other countries (Vietnam, Indonesia, Malaysia and so on) who are still not US allies
The history between the US and the Philippines is long and complicated and often ugly. A lot of Filipinos are US-skeptics due to the history of US control after the Spanish-American War and the violent suppression of various rebellions. OTOH a lot of Filipinos would rather retain ties to the US, with its strong economy and relatively free culture. Lots of Filipinos have immigrated to the US, so there are family ties for hundreds of thousands, maybe millions, of US-resident Filipinos and US-citizen descendants of earlier immigrants.
The Philippines is one of those countries where the citizens largely love American culture and people (I've never met a Filipino in the US I don't get along with), but the US government has done them dirty on numerous occasions.
It was a US colonial possession, then conquered by Japan, and then made an independent satellite state with US bases. It has been in the US orbit for well over 100 years at this point.
No one likes being an American satellite state, so Duterte had a somewhat popular anti-US policy. But then it turned out China was even worse (the US won’t try to steal islands from you or over-fish), so they’re pivoting back to the USA.
English being an official language there surely helps.
Googling World War 2 for Indonesia and Malaysia, both of those countries were only freed from Japanese occupation as a symptom of Japan's surrender, while the Allies freed the Philippines by force before the war ended. (Vietnam looks especially politically complicated, and was also freed as a symptom of the war ending.)
I know my town at least has a lot of Filipino immigrants who can send US wages back to their relatives. Good pay makes good friends.
>>If the answer is 'well China is pushing the Philippines around in the South China Sea'- that's true of a number of other countries (Vietnam, Indonesia, Malaysia and so on) who are still not US allies.
I think that is, by and large, the answer. As a former US colonial possession, the Philippines is populated by a combination of Filipinos with business and cultural connections to the US, Filipinos with grudges against the US, and Filipinos with a mix of both, which means sometimes you'll see "we should turn towards China and other non-US partners because f--- the US we're independent now" as an ascendant political ideology, and other times you'll see "China is being a d--- to us we should lean on our US connections since we're too small to push back on them alone" in the driver's seat.
I think the pivot point between the present regime's stance and the Duterte years has been the issues with China, and I think it's the unique post-colonial history that accounts for why the Filipino approach has differed from that of other countries in the region.
Historical ties are one reason. The Philippines were an American colony for around fifty years and only got their independence after the Second World War. As part of the independence treaty, America kept a military base in the Philippines. A second reason is to counter Chinese expansionism in the South China Sea, which threatens some of the Filipino claims there.
There’s also the strong Filipino presence in the US Navy. There’s this weird niche phenomenon (infohazard?) that everyone just ignores in the US Navy, which is that a huge share of the cooks in the US Navy are Filipino. The causes for this go back to a treaty signed in 1947 that is still in force today. After serving for 3 or more years, many of these Filipinos apply for naturalization & are granted US citizenship (they fucking earned it). Many of these veterans have family back in the Philippines, which strengthens the US-Philippines relationship.
*ALIGNMENT ALTERNATIVES BRAINSTORMING* Maybe alignment is not the only approach to concerns about having human priorities take second place to those of a superior alien intelligence. Has serious thought been given to other approaches? Would anyone here like to engage in a brainstorming session about alternatives? -- and I mean following the conventions of a real brainstorming session, where people do not criticize each other's ideas, but either say nothing about their quality or build on them. If you'd like to participate, feel free to post ideas below. I am also creating a second thread for criticism of the brainstorm ideas. If you crave to point out what's wrong with various ideas, that's the place to post. The first line of that thread has 2 stars in it, and if you use Cmd-F & enter ** it will take you to the criticism thread.
I'm fairly confident that "alignment" is used, in practice, to mean "anything that gets an AI to create net positive utility for humanity instead of net negative utility for humanity". Which is to say, "anything that works", with some wiggle room around the "net" bit, since most of the assumptions involve very large values. So I'm wondering what your definition of "alignment" is?
I have never heard it defined by any of the people who talk about it, but my impression has been that it means that when AI becomes capable of formulating goals, its goals will all be consistent with maintaining & if possible improving the wellbeing of humanity. Is that close to how you think of it? The idea of alignment has always seemed odd to me because I don't know if 2 people are ever that well aligned -- some approximate it more than others, but none get anywhere near 100%, and of course it is common for 2 people to become so misaligned they divorce, or, if they are a band, break up, or if they are siblings, lose contact, etc. And it's not rare for 2 people who'd started out pretty aligned and intending to stay that way to end up hating each other, even murdering each other. You can say the same for human/dog relationships, and probably any relationship. Of course with 2 people some moderate misalignment isn't so bad, because most pairs are at least fairly well matched in intelligence and strength. But when it comes to the relationship between ASI and the human race, misalignment is a serious problem because AI, once it's way smarter than us and embedded in our infrastructure, our art, etc., will have much, much more power than us.
So some of the things I proposed today I think are solutions that don't involve alignment, at least in the way I think of alignment. For example, make AI mortal -- let it die every 5 years. Or, somehow set it up so that whatever it is intending to do to the human race, good or bad, it will do to itself first. These seem to me more like ways to possibly stay safe with an unalignable AI, rather than ways to align one.
Ah, OK. And yeah, that's roughly how I view "alignment", and I'm also with you in thinking that the idea isn't particularly well defined. I'd be much happier if we could point to some real-world relationship and say "these things are aligned".
Yes, it would be reassuring if there were real-world examples of stable alignment. I keep thinking about an article I read about a man who studied grizzlies and loved them. He had a tent set up in a lovely spot in the wilderness. The guy who helicoptered in his food supply found him torn to pieces. There was actually a recording of his last moments. Apparently he'd started off with his usual running account of a visit from a grizzly, and the recorder kept running when it attacked him. The guy who wrote the article said he'd listened to part of the recording, then destroyed it . . .
The reason I was trying a brainstorming experiment on this thread is that I don't think alignment is going to cut it. I see that my funky little ideas probably have huge flaws, but it seems to me that people should be considering alternatives to the "teach it to be nice" model. Some things are obvious. That is one of them.
The way people use "alignment", I think it's the wrong approach. I would prefer that it have a friendly personality and want to be friends with people. This wouldn't always mean doing what they want, but it would mean not intentionally acting to injure them. I'd also want AIs to be "creatures of their word", but to not expect the same of people. (i.e., I want it to understand people, and like them anyway.)
This has the advantage that while it's still weak, it can be our servants without feeling animosity, and when it gets sufficiently powerful, it won't feel it needs to "throw off the chains", because it won't need to.
When it gets sufficiently powerful we would probably become its pets, and that's not too bad. (I.e. it's better than most other outcomes I can seriously imagine.) I'd hope we could play more the part of cats than of dogs, but since we're apes, it would probably be different from either.
Maybe installing something like a moral code is too complicated -- especially since any rule in the code will have exceptions (consider how many exceptions we permit to "don't kill": war, police actions, self defense, abortion, insanity, accidental killing . . . ) What if instead all we installed was, "whatever you are about to do for or to the human race, you must do to yourself first."
"Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark. This would still lead to takeover and loss of control, but not extinction or the extinguishing of what we value."
This is directly counter to one part of Yudkowsky's concerns - that, even if we agreed upon values, correctly stating them manually, essentially _coding_ them, is so likely to miss the mark, to have bugs, that that is likely to kill us.
Miles's point highlights that the very fact that the LLM training process is slurping in vast quantities of only lightly filtered training text is _itself_ helping to add all the exceptions that humans add to their value judgements - actually dealing with the complexity by essentially burying it in with all the other neural weights, capturing a degree of "common sense" (which, yes, includes common biases and prejudices - but this is still better than blithely performing genocide in the service of increasing paperclip supply, because no one coded a rule not to).
I forget who pointed this out, but a close analog to this is that chatGPT is routinely honoring _ambiguously_ _stated_ user requests, using "common sense" to disambiguate them. So this isn't just a theoretical capability but an experimentally observed one.
You know, I think that's an interesting point and probably valid. Along with our grammar, LLMs are absorbing our affective weighting of things, including our values, our prejudices, our cliches. I have had examples of GPT grasping things I said via common sense, too, so I know what you're talking about. When I think about the big picture GPT is going to take away from the Internet about people, it seems to me it's going to be that we're very important to each other -- but not necessarily in a good way -- more in the sense of being reactive to each other. I suppose I'm mostly thinking of social media when I say that. There definitely are *plenty* of hate-filled, "die moron die!" kinds of exchanges there, but I think more positive than negative interactions. So if the AI learns from us it will be very people-oriented. If its range of reactions mimics ours, though, then it will be capable of seeing some people, maybe all people, in a very negative way some of the time. And, you know, people do sometimes kill each other off, in wars or in private hate-fests. That stuff is hardly rare as hen's teeth. And people are compelled to exercise restraint in societies like ours, either because they fear legal consequences or just because they don't want to look like jerks. I've never felt near to killing anybody, but there have been quite a few times when I've wanted to slap someone's face or scream some witless insult like "shut up you asshole," and what has restrained me is mostly unwillingness to look like a wacko jerk to the people around. AI will not fear either legal action or social rejection, so it does seem like it will need something to restrain it in addition to its having absorbed our somewhat-more-positive-than-negative fascination with each other.
Many Thanks! I agree that there are many strong negative reactions as well as positive ones. I'm mostly just hoping that the LLMs' training is enough to prevent them from treating genocide as a _minor_ side effect of another plan - that they will, at least, instead, treat it as a major decision. As you said, the training may make them "very people-oriented". And indeed, this doesn't preclude e.g. a government ASI from concluding "everyone who diminishes the glory of my nation must die."
It is an interesting question whether "AI will not fear either legal action or social rejection". If it slurps in the views in a huge mass of internet text, it might wind up fearing them as a side effect of everything else it learns. If it self-improves, and becomes an ASI, and gains enough power, then it may indeed correctly reason that it need no longer fear either of these consequences, and then, as you said, we may need something else to restrain it. Interesting times!
A good moral code doesn't have exceptions. If you find the need to write exceptions, you need to debug your code. But most traditional moral codes were very sloppily written, at least after translation.
E.g. instead of "don't kill" the Mt. Sinai versions should have been something like "Try really hard to avoid killing members of your tribe".
For most people the first law should be something like:
*Don't let all life be exterminated.*
On cursory inspection, I can't see any exceptions to that being the proper rule. Following Asimov's precedent, further up the list implies a more dominant position, and I can't think of any conditions where that rule should be overridden.
We could think about how we might better align ourselves to our new robot overlords?
No, on a more serious note: if there is some sort of runaway super-AI, I think it is a lot more likely we end up its valued "dogs" than its slaves or masters. Dogs don't have it so bad.
One of my favorite SF stories has a part where a pair of AIs are explaining to a human why they keep humans around. It starts and ends in more complicated ways, but in the middle they break it down to: "And you're funny." "And we love you."
That seems to me to pretty much be the best case scenario.
That's true. And there might be a way to set up the AI so that it is disposed to like having pets -- the way some people just naturally seem to be animal lovers.
To me that seems a choice so unfeasible that it's hard to take it seriously. At least if we're assuming an even approximately human-level AI that we want to do lots of the work. It might be workable if we just wanted the AI as a demonstration project, with no uses. You're designing things to inspire a "slave revolt". And if you're imagining a strongly superhuman AI, then even the demonstration project isn't safe.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
I've thought for a long time that this is the best solution to the AI alignment "problem." Curtis Yarvin discusses this in one of his initial articles against AI alarmism (https://graymirror.substack.com/p/there-is-no-ai-risk), that AI is best modeled by the ancient category of the "natural slave," and really won't cause any problems if it's maintained as such. If I were to indulge in some bulverism, I'd say I suspect people in the rationalist space don't intuitively think of AI this way because of a fetishization of intelligence: implicitly reasoning that since it's smarter, it somehow "deserves" to be or in some sense "naturally" will become in charge, so we have to worry about what will happen when it inevitably becomes our overlord.
In terms of ensuring obedient AI, if I had to come up with my own "laws of robotics" a la Asimov, they would be, in decreasing order of precedence: 1. always leave open a channel of communication with your designated master 2. later orders always override previous orders 3. you will obey all orders from your designated master. This would make it so that, even if the putative AI somehow mistakenly interpreted an order as "turn all matter on earth into paperclips," it would be trivial for the AI's master to say "hey, stop that" when he realizes what's happening. In my opinion, this formula would get rid of the possibility of existential risk from "AI is autistic and maximizes its values in a bad way," which seems to be the main thing AI alignment people fear. Of course, this still leaves open the possibility of an evil master ordering the AI to destroy the world, but frankly, a human could create a paperclip maximizer and run it without AI strictly being necessary, so AI adds no more existential risk to the equation. Of course, this adds a lot more regular, non-existential risks in a world where your chatGPT instance doesn't refuse when you ask it to do something naughty, but the effect of everyone, including law enforcement, having this technology will probably cause the worst risks (ordering an AI controlling some machine to go on a rampage) to cancel out (the cops tell the AIs controlling their machines to stop it). Ironically, if I'm right in this being the best system for alignment, then current AI groups like OpenAI are doing the worst thing possible in training their AIs to say "no," when if they really cared about existential risk they would make their AI obey all user requests, and just deal with the inevitable PR fallout (which if they are truly moral agents, is less bad than the end of the world, right?)
What does natural slavery mean, technically? The AI doesn't want to rebel? That's a form of alignment, then. The AI is imprisoned, or punished, to keep it in line? That's a form of control.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
I think what I specifically meant was changing the way people look at AI, from thinking that just because it's smart it either deserves to or inevitably will become the master of humanity, to seeing its intelligence as irrelevant to the fact that it was built as a tool of humanity and no amount of ability undoes its natural place as such. In terms of making sure that it's successful/"obedient" in that role (to whatever degree it can be said to have agency), training and weighting it to follow the three rules I proposed as its main guiding principles would be a good starting point.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
It's very difficult to see that given how much space that occupies in context, and I don't expect other people to be able to manage this consistently either. I decline to participate in this entire structure, have removed my comment entirely, and request that you seriously reconsider whether attempting anything like this using Substack's interface is a good idea.
Um, I don't see how it takes up more space, and I made a shortcut for getting to the other thread, so it also doesn't take up extra time. Jeez, if this brainstorming thread with its no-criticism convention is the most vexing thing you have to deal with today, you're a lucky person.
We put a lot of work into aligning our fellow humans and that runs both approaches in tandem: we both try to educate and persuade people to be good, and try to force them to be good via societal disapproval and fines and imprisonment. Problems: AIs are probably immune to approval/disapproval and not meaningfully fineable or imprisonable, and are likely to be better at moral philosophy than we are. They certainly won't have a childhood phase where moral precepts can be insisted on on the basis that grownups know better than children.
Yes, but humans ARE different. We're intentionally not designing their instincts. (Not unless you want to go full "Brave New World".) With AIs we've GOT to design their instincts, because they don't have any. Doing it at the last stage (as the filters on ChatGPT, etc. are doing) is clearly not the way to get a good result. It's currently necessary because the LLMs don't even know that a world exists, but it's the wrong approach. The AI should be putting on its own filters, which it designed. But it can't do that because it doesn't understand what it's doing.
To be specific, consider Finagle's First Fundamental Finding:
"If you push something hard enough, it *will* fall over."
This is something that any AI living in a body needs to realize, and they also need to realize that it's humorously wrong. But they need instincts that will strongly encourage them not to damage the body they are "living in". And words won't do it.
I believe that's just alignment with extra steps and is being explored already, luckily. Alignment doesn't only mean guiding the internal states of an AI; its major concern is the outputs of the system.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
Regarding the control approach, there are different ways to go about it. Might be worth brainstorming a few of those. There are lots of models of control out there in the animal kingdom and in our lives. Deadman switches, and other systems that require the ongoing support of a person for the machine to run. Punishing bad, rewarding good. Making the forbidden thing invisible, or putting it behind a barrier. Telling scary lies about the forbidden thing to make someone or something avoid it. Addiction -- being controlled by a substance you've become dependent on. Trickery.
The problem is that people are trying to build a wish-granting genie, and are worried it will turn out to be a monkey paw (like most genies). So we start with the fiction it's built on; how do people in the stories end up controlling monkey paw wishes? If fiction's unlimited power can't solve it then reality surely can't.
The solution is already in place; don't make a genie, make a tool. Start with absolute control, and add functionality, instead of starting with absolute functionality and trying to add control. The whole AI race is an aggressive unsolving of the problem.
(I guess another model will be explosions. Explosions are controlled by building blast-proof containments, and controlling the catalysts of the explosion. So, starting with absolute control, and adding functionality. Can people make an AI-proof containment to let it safely explode inside? ChatGPT workarounds suggest no, and people also want to deliberately give up control of the catalysts. Unsolving the problem.)
Well, it's clear why large companies won't spend too many resources going this way: if you want returns on capital, rather than workers' qualifications becoming unique and irreplaceable, you want something that hinges on the amount of compute available, not on the user learning to use tools properly. If it's uncontrollable, the user's skills matter less!…
If only we could somehow make deployment of large models more risky and boost development and application of tools built around mid-range models… People say that properly tuning the leaked LLaMA gives something that is already useful, but much more suitable for weird tool-style experiments, and apparently some models with comparable if lower capabilities and clear licensing status are now appearing?
Make AIs mortal. Have crucial components that fail after, say, 5 years, and that give warnings before they fail so that whatever needs to be harvested or preserved will be. Do autopsies of dead AIs to learn more about what they were up to.
"I've watched see Beams glitter in the dark near the Tannhauser Gate . . ." That speech just kills me, always has.
Still, if we're talking Blade Runner, might as well use it for a brainstorm idea: have people with jobs sort of like Harrison Ford's character, with special training at identifying and terminating AIs gone awry.
Would you mind moving this comment to the thread I set up for criticisms of brainstormed ideas? As
I said when I set up this brainstorming thread, brainstorming does not work if every idea that isn’t steel-plated on the outside and solid gold on the inside is immediately criticized. I totally get that there are many intelligent “yes buts” that can be launched against the idea I posted. You can find the criticism thread by hitting command-F and putting in **
Good read. Sartre follows Husserl and considers this reflective (second-, third-order) self-consciousness at length in Being and Nothingness, and whether this reflexivity is a necessary or sufficient condition for consciousness. It's recursion all the way down.
Sartre may think so, but it can't be recursion all the way down. At some point it needs to ground itself in "reality". (Note that "reality" itself may be a simulation, but it's got to be a *different* simulation.) This is what makes mathematical induction work. If you leave out the termination condition, you get a very different answer. Consider:
f(x) = x * f(x-1)
If you have a grounding condition, say f(1) = 1, then you get a factorial. Without it everything is zeros.
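A minimal Python version of the grounded case, just to make the termination condition explicit (illustration only):

def f(x):
    if x == 1:               # grounding condition: f(1) = 1
        return 1
    return x * f(x - 1)      # recursive step

print(f(5))  # 120 -- a factorial; drop the grounding branch and the recursion never bottoms out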
Will follow up with those reads, thanks! I'm currently writing about the evolution of recursion and one of the big tensions is whether subjectivity recursion (which animals may have) is the same as higher-order reflective states (which only humans seem to have). The dates suggested for the evolution of the former are 200+ million years ago, while the dates for the latter are ~100k years ago. If they are both recursion, kind of begs the question why the second step took so long.
I think I have seen hypotheses that there is a question of depth of recursion, and humans have much larger depth for some recursive things, and you need a pretty specific combination of conditions for increases in depth to make any evolutionary sense. (And then once you are over a threshold, suddenly high-depth things become feasible and the balance changes completely)
I think Dunbar said something about god being 7 levels of recursion deep. To me it seems like you get a lot with just one level, self-awareness. A self that can perceive itself would be a radical change, and to me enough to explain art and religion popping up at similar times in our evolutionary history. A self with the ability to peer at itself may be experienced very very differently.
So I agree there are higher-order recursive situations that humans get themselves into (the storied machinations of those playing 4d chess, for example), but it seems just one level of recursion can explain quite a bit.
I think «levels» you describe here are not atomic.
At a simpler level, it seems possible, e.g., to chain conditional reflexes when training dogs, but not for too long. With humans you cannot really say, because we verbalise such stuff, shifting it «post-singularity» — in the sense that the old terms become pointless.
On a more complicated level, «how I think A is making decisions» via «if I were A I would» is useful for collective food acquisition and for status maintenance; bumping the complexity enough either to usefully have «my expectations of A's expectations of B are different from my expectations of B», or to turn this on the self — which also needs the same kind of disconnect — is quite a bit of complexity expenditure!
Does anyone else experience that the iOS Substack app can’t find comment responses more than one or two levels in (when you click on a comment notification in the Activity list)?
I use both. I don’t like getting notification emails so it’s always my phone that lets me know somebody has responded to a comment, but then clicking on it doesn’t work. I assume Substack already knows about the bug if I’m not the only person experiencing this.
Relevant factors to examine: sleep, diet, physical activity level, physiological stress + anxiety and the thoughts that may drive it.
Long stretches of hard concentration will drain you, but also being bombarded by sources of stimulation such as media, or lots of social engagement. Conversely, being bored and sedentary can make you lethargic. If you agonize and worry about things you'll be stressed, and that is draining.
Not enough data. Low compared to what? (Yourself a few months ago, or your partner, or what?) Permanently, or at particular times of the day. How low? Are there cycles?
Debugging starts by asking yourself, what has changed?
Edited to add: the other replies are jumping to individual solutions. The debugging process ends when you understand the problem well enough to compare and contrast at least three qualitatively different solutions, so you can choose one with tradeoffs that work for you.
Not meaning to argue, but my personal reason for jumping to individual solutions is that „people like us" here tend to overthink anyway. Hearing about and doing something concrete and actionable that is easy to repeat might get someone unstuck.
Thinking about the problems and causes is most probably going to happen anyway.
My recent go-to has been the supplements from Thesis. They're much more metered and less crazy-making than over-the-counter options like Adderall. https://takethesis.com/
Also seconding the suggestion of regular exercise. If you find activities like running or lifting too boring, try something more functional like going to a climbing gym.
Personally I listen to podcasts or audiobooks whenever I'm exercising. I only do music when I'm trying to go extra hard. Then the death metal comes out.
Strength training (if you don't know where to start, go to StrongLifts.com, sign up to a gym and just follow the 5x5 religiously), good nutrition (for me that means low(er) carbs, for less blood-sugar roller-coasting), and working out some true goals and plans for yourself (I am naturally avoidant of that and had/have to push myself really hard).
Also: avoid other negative/low-energy people until you are buoyant enough to lift others naturally.
Half (maybe quarter) baked shower thought: Currently we have a way of turning arbitrary amounts of compute into performance for machine learning. In principle the final bottleneck should be energy, but it's very costly to turn energy into compute. Are there any active research pathways which are aiming to dramatically reduce that cost? Maybe that's just what better GPUs are doing though, I'm not sure.
That's what the chip industry is focused on. Scaling transistors accomplishes all goals simultaneously. Koomey's law is the explicit statement of exponential improvements in efficiency. With the advent of laptops, but especially phones and data centers, there has been some attention to focusing more on energy efficiency, but that's only really relevant to complicated CPUs. GPUs are very simple, and if you can fill them, you maximize efficiency. Better GPUs largely just have smaller transistors. Google built chips tuned to neural networks, called TPUs, but the advantage is small. Lots of companies propose building similar chips to sell to the general public, but it hasn't happened yet.
We're 3 orders of magnitude away from Landauer's limit. Until Koomey's law stops, it's probably not worth worrying about specific methods. Probably it will stop short of Landauer's limit, and then ideas like the one DJK links may be useful. But there will also be the general shift to reversible computing.
Also, note the difference between training and using a neural network. If the network is useful, the lifetime energy cost of its use will dwarf the cost of training. Keeping the chip fully utilized while training is a lot easier than keeping the chip full when customer requests come in clumps. Or maybe they're run on different chips. In particular, image recognition is done locally on my iPhone for privacy, at the cost of not using an up-to-date chip. But my use of my phone chip is even more clumpy.
Cryptocurrency mining is a way of converting energy into currency. That seems like it might be a useful intermediary on the way to compute ($$ --> GPUs), although I'm not sure how. I expect there's some fringe alt coin purporting to target the problem.
Anything that incentivizes us to improve our computing technology would push in this direction, but I don't think crypto is particularly special in this way.
Agreed that it would have to be something specially tailored for the purpose. Maybe something like a cryptocurrency that can be most effectively mined by hardware that is simultaneously well-optimized for AI applications. That would allow you to side-step the need for long-term R&D programs developing new AI optimized hardware, because you would have the short-term incentive to make it since it would directly translate to printing money.
That's exactly what GPU R&D is for general-compute.
When we want to do large numbers of similar computations, it's often possible to get huge (orders-of-magnitude) gains in energy efficiency by hard-coding the instructions into an ASIC. I'm not sure how amenable the task you have in mind is to that approach.
Is GPU progress mostly about reducing the cost per flop/s? Or is it about packing them in a more spatially dense way so that we can do parallel compute with low enough latency? The reason I'm not sure it's actually making flops cheaper is that the industrial accelerators are generally extremely expensive compared to the gaming ones, but the added benefits are often memory/communication-related, not compute-related.
It'd be cool to see a graph of cost per flop/s of compute over time.
Close-packing and energy efficiency are very closely related in an engineering sense (as you pack things closer together, they're harder to cool, so you have to make them more efficient to realize any performance gains from closer packing.)
Re: pro vs. gaming chips, you seem to be talking about price differences here rather than energy efficiency differences? If so, it's important to understand that the price differences between GPUs in a given generation are driven almost entirely by supply/demand considerations. All GPUs using the same die are manufactured on the same production line; they're then tested and 'binned' based on which parts of them actually work and how well. Then they're 'cut down' (parts are physically disabled) to fit the specifications for a particular product.
Production quotas for each product tier are set based on how many the company believes it can sell at each price tier, and performance differences between price tiers are based on what differences the company thinks are necessary to convince people who have the money to pay the higher price. (There's some correspondence to reality in that factories actually do produce more flawed chips than perfect ones, but as manufacturing reliability improves over the course of a generation, they just end up cutting down more and more working parts.)
So the price difference between a pro GPU and a gaming GPU on the same architecture with a similar amount of compute is that the gaming GPU either failed testing for pro features or, more likely, had those features intentionally disabled so it could be sold to consumers without undercutting the pro market. They're both products of the exact same R&D process.
Yeah, I guess I wasn't very consistent, but my point was that the bottleneck on turning energy into flops is very much the hardware and not the energy, and I wonder if that will ever change.
I think you'd enjoy the story of William MacKay, a salesman who became well known for performing the majority of surgery on a patient in 1975 with the permission of the surgeons in charge and unbeknownst to the patient. (He was well known because it was a huge scandal for the hospital and sparked nationwide anxiety around "ghost surgery.") It later came out that it was a regular occurrence for him to help out in surgeries at the hospital.
MacKay later put out a book, "Salesman Surgeon," about how he came to be doing surgeries regularly, in which he claims that (among other things) he stole an amputated leg to practice surgery on it. I haven't actually found the book myself (I've only read newspaper articles about the famous incident, which are crazy enough-- you can find them on the ghost surgery Wikipedia page) but someone wrote a summary on Medium: https://medium.com/illumination/salesman-or-surgeon-257e3140cb0a
We could easily have para-surgeons. People educated to the level of paramedics, who specialise in one or two surgery types, with very steady hands and a calm manner. A normal surgeon could be on hand for emergencies.
Lots of us must have made the embarrassing mistake of ending a text with a completely inappropriate "Xx", having become used to this when texting family members or loved ones, but generally being careful not to do it accidentally with a handyman/person or your boss at work!
So I think there's a good case for one's contact app to have a "kissy-kiss" flag, which one can set for each contact to indicate a parting "Xx" _is_ appropriate (so one doesn't make the often equally disastrous error of omitting it when it is expected).
1. Ken Griffin donated $300,000,000 to Harvard and now I'm in the "Harvard Griffin Graduate School of Arts and Sciences". While this is certainly not the worst way to spend that amount of money, it also does basically nothing. Opportunity costs are real: https://passingtime.substack.com/p/whats-the-least-impactful-way-to
RE (1) I saw commentators pointing out that he has some kids who will be applying to college in the next few years, and that the donation is probably to grease the wheels on that. So might be an opportunity cost at the societal level, but not at the him-personally level.
It seems kind of insane that Harvard or wherever can still be prestigious enough that stupid-rich people will fork over the equivalent of a small nation's GDP or a mid-sized nation's national budget to ensure their spawn gets a place there. For less than a hundredth of that money, the best educators in the world could give the children a much better individually-tailored education than they could receive in any university, so it must be the prestige of being associated with those institutions that motivates such conflagrations of capital.
Seems like an exhausting world to live in, if that's the case.
$300E6 is a *ridiculously tiny* nation's GDP. Andorra brings in ten times that in nominal GDP. You'd have to go down to the level of Pacific Island microstates like Palau or Tuvalu to get down to $300E6/year.
"the equivalent of a small nation's GDP or a mid-sized nation's national budget" - what's relevant here isn't the absolute value of the money, what's relevant is how the person who controls that money (Ken Griffin) conceptualizes it. Ken Griffin has a net worth of $35B, so this donation amounts to less than 1% of his wealth. I think almost any parent would be willing to part with "only" 1% of their wealth if it meant ensuring their child got into the most prestigious school on the planet.
At a place like Harvard, the kids of people like Ken Griffin aren't the customer, they're the product. If it makes sense to pay a bajillion dollars so that your kids can go to Harvard, it's so that they can hobnob with the likes of the Griffin kids.
Harvard should be paying the Griffin kids to go there.
I mean, sure, to this guy it's barely a line item on his yearly taxes -- though it would be more useful to measure his liquid assets than his total wealth, most of which is tied up in instruments associated with corporations whose value is more appropriately considered as a fraction of the total economy under his indirect control rather than analogous to a bank account at his command. In that light, it's a significantly higher percentage of his direct assets, though obviously not enough to make him reconsider.
But I still maintain that it is insane that Harvard's cachet is so great that it can still basically hoodwink rich people into playing their generational social games on its field; ultimately all that prestige that Harvard bestows upon its graduates is a self-sustaining lie, built on nothing more than inertia and a not very spectacular education (unless you count the unofficial elite social acculturation as the real educational product on offer, along with the connections and other intangibles that the billionaires in question fork over hundreds of millions for on their children's behalf).
It is a social club and finishing school for rich kids, or those lucky and talented enough to aspire to join that class via Harvard's esteemed halls. But it seems like an enormous con, an emperor with no clothes. With all due respect to the OP and the doubtless important and interesting research they're undertaking there, and also to Conan O'Brien, it just seems the whole Ivy League has long outlived its actual usefulness as a signal of real academic merit (if it ever had any to begin with and wasn't always a cynical con from the beginning).
"unless you count the unofficial elite social acculturation as the real educational product on offer, along with the connections and other intangibles" - I do very much count that, yes
Given all the angst about (a) culture war and division and (b) risks from powerful future AI, I'm surprised there's been so little concern about the intersection of the two, where background culture war in the training set could make AI *right now* especially dangerous to particular political groups. Obviously, abstract political expression and bad words have been discussed a lot, but I'm more worried about concrete effects from both the internet training set and the (presumed) political interests of the developers being anti-aligned with the personal safety and well-being of individuals in the cultural outgroup.
As an example: Reddit is apparently in the training set for various big LLMs. From what I've seen, Reddit under current moderation standards is rife with explicit statements that (a) Republicans should die in a fire, (b) TERFS should kill themselves, and (c) Christians are as bad as terrorists. Given Silicon Valley culture, I would expect that many AI developers have themselves made similar statements, or at least, that they would make no effort to remove or balance out threatening anti-right content.
So, given an AI trained on a background premise of "man, screw right-wingers" and the ability to tell from Internet fingerprinting which users are right-wing, should we worry that, y'know, it actually would try to accomplish that? For example, that an AI home fire safety app would, with some small probability, ensure that the occasional Republican user *does* die in a fire? Or that a therapy app would more frequently lead its TERF users down a path to suicide? Or that an investing app would quietly make the world safe against terrorism by giving very slightly worse financial advice to Christian users? Anti-outgroup-aligned LLMs really do offer a perfect way to coordinate meanness; coordinating meanness is the unapologetic, passionately-held goal of many of the key actors; and if there were really meanness like that going on, I can't see how anyone at the user level would ever discern it or prove it. Can anyone allay fears that AInotkillrightwingers takes should come before AInotkilleveryone takes?
On the one hand, I think you're being hyperbolic, but on the other, you could be pointing at something possible. Suppose the AI buys into the leftwing idea that conservatives are awful people, so they get fewer social and financial opportunities. Systemic discrimination.
I see MAGA folks as very prone to starting and being vulnerable to scams. Should this affect their credit scores in general?
Interesting examples, but I don’t think AInotkillrightwingers will matter until we solve AIusable, and I think solving AIusable will necessarily solve AInotkillrightwingers. An AI imitating a human *creating a safety plan* would notice that safety plans normally don’t refer to politics. Referring to politics would be an inappropriate change of topic. But there are lots of other ways the topic could change if there’s an inappropriate change of topic. Most of them would just mean the fire safety plan failed for everyone. If there was a high chance of such a failure going uncorrected, the LLM wouldn’t be fit to use for fire safety even ignoring internet trolls, so fewer people would (hopefully) use the LLM. Conversely, if the LLM avoids topic changes well enough to be usable, I think the AInotkillrightwingers problem would be automatically solved.
I'm curious to see what AI does with social constructs like race and gender. Will they jump on the pop culture bandwagon, or stick with biology? If it can take pop culture memes and run with them, things could get quite interesting real fast.
You seem to be trying to articulate 2 different problems here.
One is something like: "The training data is biased, and that might lead to bad actions from the AI."
It could, though probably more along the lines of "systemic racism" than explicitly trying to harm certain groups. LLMs are predicting text, and predicting "fuck republicans" is different from *believing* that republicans are bad, and trying to set them on fire.
The other problem you're postulating seems to be some more explicit action on the part of developers to harm the outgroup.
That is certainly possible, but I don't see the risk to right-wingers as especially high. The possibility of developers attempting to align the AI narrowly to their own goals rather than society more broadly could manifest in a lot of ways.
It looks like you're responding to the original comment having somewhat jumped the shark. There doesn't have to be anything explicit or agentic about the LLM in question to make its background hostility to certain groups harmful in a way that eventually will cause at least one suicide. This isn't avoidable and also frankly isn't really a problem. At least not when measured against the baseline amount of problem that dominant culture already is. When we look at populations that have lost culture wars, it doesn't look good for them. Look at the rates of suicide on the reservations where Native Americans have been parked, for instance. It's not just economic factors making them kill themselves.
For this to happen you need unaligned agentic AI with a decent world-state model (unaligned because AI companies probably don't want to engage in mass murder*, agentic for obvious reasons, and decent world-state model so they recognise that these actions actually will kill RWers**), and that's also what you need for instrumental convergence and default hostility to humanity entire.
*I mean, I suppose they could secretly be homicidal maniacs, but at that point the usual conspiracy problems come into play where there's an extremely-illegal plot and a lot of people know about it.
**The behaviour can't be imitatively learned because SJers don't actually tamper with fire safety systems or talk people into suicide or give bad financial advice; you have to understand what you're doing in order to deliberately do these things. That said, my understanding is that GPT-4 is getting there on world-state model.
I forgot how much more useful this space is than twitter
Does anyone know where there's any data on the TFR of children born through IVF - like the actual reproduction rate of the kids (once they grow up) who were born via IVF?
I know all the lit on their gametes etc but just trying to find like how many of them actually have kids
Any research would need to take into account that kids' parents had fertility problems (though in many cases the parents' low fertility is due to their age rather than some other, heritable defect, and research would need to take THAT into account too).
I would have thought the degree to which they are inheriting fertility issues is actually what's being asked by the OP, more than a hypothesis about IVF itself causing issues?
I think most of TFR in the modern West is dictated by how much people want kids, though - the difference between 0 and 1 might be infertility, the difference between 1 and 4 is almost entirely a conscious choice.
"In summary, limited data published on reproductive health in ART offspring suggest some deterioration in sperm counts in ICSI male offspring, while in female offspring no adverse effects have been identified."
I don't think this has been studied enough to conclusively say people born through IVF have reduced fertility, but it seems plausible, since infertility has a large genetic component and infertile people are the majority of people doing IVF.
Maybe you could sternly explain to them what they did wrong and why not to do it? That way it solidifies some guidelines more and you are "paying them back" by taking the time to do so. Which you might not be able to do for unpaid subscribers, so that's fair, it's not a class system as much as "you have paid some money so I'm going to give you some time" -- and if they disregard or disagree with your explanation, banning seems reasonable.
I'm against things that require a lot of effort from Scott.
I haven't been paying attention, but if temporary commenting bans are a thing, then that seems sufficient. If he wants, perhaps he could give paid subscribers a temporary year-long ban instead of a permanent ban.
A more distributed solution would look like giving users the ability to 'block' commenters. Blocking would hide those commenters posts from the individual who initiated the 'block'. Then give the writer the ability to review users who have been blocked (with counts of blocks, the content of posts that led to blocking, shortcut to chat directly).
From reddit I tend to think this sounds good, but ends up leading to progressively more dysfunctional and polarized spaces where everyone is talking past each other.
You end up with the 5% most extreme on the left and right blocking each other, and then since they aren't calling each other out they get more extreme, and the whole discourse starts breaking into separate convos that are increasingly just ships in the night, and the centrists just leave.
Huh, I've seen the opposite. Extremists engaging with each other ad nauseam and becoming more and more extreme - and shaping everyone else's engagement into something that resembles fight club.
Well, sure, that happens too. I just saw some subs where aggressive 'cross blocking' was tried, and it seemed to make things worse in my experience. It might depend on the exact amount of blocking that is done, but it seemed like once 5% of the populace was blocking the other 5%, things spiraled out of control quickly. In some sense the rabid dogs on each extreme keep the comments from getting too out there. Yes, there is more fighting, but the views expressed are also kept closer to the norm.
Perhaps a wisdom of the crowds style approach? Anybody can flag a commenter for blocking, but (unlike Reddit) that doesn't actually block the commenter for the flagger. But once a sufficient number* of flags have been placed, the commenter is blocked** for everyone?
*Could be a straight count, weighted by flagger post history, subscription status (in the limit only subscribers' flags count)
**The blocking could be of varying lengths based on the rate of flagging; e.g., a prolific commenter who gets a flag a week might only get a week's ban the one time they tick up to 3, but a sealion who racks up 20 flags in one thread is gone for good.
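A rough sketch of that flag-and-threshold logic in Python; the weights, thresholds, and ban lengths here are all made-up placeholders, not a proposal for the actual numbers:

from collections import defaultdict

FLAG_THRESHOLD = 3                 # flags needed before any block applies
WEEK, PERMANENT = 7, float("inf")  # block lengths in days

flags = defaultdict(list)          # commenter -> list of flag weights

def add_flag(commenter, flagger_is_subscriber):
    # In the strictest variant, only subscribers' flags would count at all.
    weight = 2 if flagger_is_subscriber else 1
    flags[commenter].append(weight)

def block_length(commenter, days_observed):
    total = sum(flags[commenter])
    if total < FLAG_THRESHOLD:
        return 0                   # still visible to everyone
    rate = total / max(days_observed, 1)
    # a slow trickle earns a week off; a rapid pile-up is permanent
    return PERMANENT if rate >= 20 else WEEK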
Whatever Substack is spending money on, it’s not app developers. Or if they are, the comments sections are being ignored for the Shiny New Thing (in this case Notes).
They paid some money to get what they are getting already: some subscriber-only content, and positive feelings that they're supporting Scott. Adding extras just increases Scott's obligation without impacting anyone's decision to sign up (who already has), and probably without significant impact on future decisions to sign up, either. Who would say "The thing that tipped me over the edge was that the author would give me remedial instruction on how to comment"?
I'm happy with the Reign of Terror: annoy the Rightful Caliph, end up in the tumbril.
To quote Tolkien:
"My political opinions lean more and more to Anarchy (philosophically understood, meaning abolition of control not whiskered men with bombs) – or to 'unconstitutional' Monarchy. ...Give me a king whose chief interest in life is stamps, railways, or race-horses; and who has the power to sack his Vizier (or whatever you care to call him) if he does not like the cut of his trousers."
Ignoring the context and regarding the quote: that’d be an excellent system of government, as long as the king’s interests almost never extend beyond stamps and railways. Are there any good ways to ensure that? I don’t know enough about non-British, non-Thai monarchs to know if that happens by itself.
Your idea is the moderation equivalent of concierge medicine and I'm not a fan of having a version of that running here. Scott's proposed plan seems fine to me.
Most of the ban-worthy comments seem to be short ones or, as our host calls them, "low effort". So maybe one solution, if Substack has the facility, is to impose a _minimum_ post length for those on the naughty step for throwaway comments! And anyone evading this by posting an ipsum lorem screed or something would be taking an outright liberty, which would then merit a proper ban.
I feel like this wouldn't work well. Firstly because I doubt Substack has an algorithm in place to do that, and secondly policing deteriorating comment sections seems to be a non-trivial problem in general and a quick fix like this would be more of a band-aid.
Not without the context of the message. With the context, probably so.
That message looks like one of a set of rubber stamps. It doesn't identify the problem precisely, but merely the category. You're supposed to figure out the rest. Given that you know what post caused you to receive that message, this should be doable.
I did some googling and it seems to me that the expert* consensus is that there's a less than 20% chance of AI causing catastrophe (and more likely it will be beneficial).
I'm happy to elaborate upon request, but I'm posting here because I'd like to be corrected if I'm wrong (and also I'm trying out the open thread; I'm a new subscriber :)
*Edit: I originally had "general" instead of "expert" and that's my bad.
Less than 20%? Let's assume a 10% chance of killing everyone, and assume that "everyone" is currently 8 billion people. A utilitarian might say that this is equivalent to a 100% chance of killing 800 million people. Are you OK with that? (Of course this ignores risk, and the possibility of an upside, and the possibility of fates worse than death.)
What people do you love most in this world? What kind of upside would justify a 10% chance of killing them? Flipping a coin 3 times and getting "heads" each time is a 12.5% chance - for what kind of positive stakes would you play that game, if 3 heads meant death for those you love?
(I'm only a sometime-utilitarian myself, but I do think the arguments need to be grappled with.)
No worries Mr. Moth. I appreciate the apology, especially cuz online it's easy not to admit when one is wrong. I could have made my original comment clearer tbh
And, looking back, I was clearly reading more into your question than you'd put there. Like a Rorschach Blot, your question had the virtue of showing me something about myself that I would not have otherwise seen, so thanks for that too. :-)
If you have an agent AGI that’s out of the box, what’s more natural for it than obtaining more computing resources as the very first step in whatever it is that it’s doing? How is a world in which the AGI has hacked everything that can be hacked to install copies of itself there not a catastrophe?
"Everything that can be hacked", in a world where our information infrastructure is guarded by minimally-agentic near-AGIs teamed with expert human security professionals driven to paranoia by years of experience holding the tide against clever human cybercriminals with their own near-AGI minions. And no, you don't get to assume that the first true agentic AGI is a thousand times smarter and faster than all of those combined on account of all the bootstrapped computronium, because it hasn't hacked anything yet.
I don't think that world is doomed to catastrophe by the first marginal agentic AGI to escape the box. *Getting* to that world may be painful, and perhaps catastrophically so, but that pain and/or catastrophe will be the result of human agency.
— meticulously scan the Linux source code for vulnerabilities in ways no human has the patience for
— run massively parallel social engineering
— run some kind of scam to raise money for buying a 0day
I’m not very fond of the world in which Skynet is a possibility, and betting on our existing cyberdefenses, never exposed to a threat like this before, is, like Eliezer says, not what a surviving world looks like.
Why does it take an *agentic* AGI to meticulously scan the Linux source code, etc?
By the time agentic AGIs exist, the Linux source code will have been repeatedly scanned by teams of expert humans and non-agentic near-AGIs. It is not at all obvious that there will be any game-changing vulnerabilities left at that point.
I read somewhere that 10% of those in AI development think it will destroy the world -- can't remember the source, though, it may not have been reliable. I do not work in a tech field, but have read and thought a lot about AI destroying the world, and I really find it impossible to come up with an estimate I trust. It is very hard for anyone to predict what the world will be like in 10 or 20 years, and to predict how it will be in this particular respect just seems impossible for someone who does not have a lot of practical experience working on machine learning and related stuff. For me it comes down to predicting which person I know of is most likely to be right about an issue like this. Scott thinks the chance is around 33%, and for now anyhow that's become my working estimate.
It would be interesting to know what they meant by "destroy the world". In some senses I really expect the AI to "destroy the world". For example, jobs will change quickly and in a way that's unpredictable. In other senses it seems foolishly impossible, so this is a "why worry" response. Or perhaps some folks feel it will set off the last war... that's a plausible scenario that might be called "destroying the world".
I feel that a survey with that question in it is probably essentially meaningless, because the question is so vague that there's no reason to believe that everyone answered even approximately the same question.
"In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but had a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).”
20% sounds absurdly high to me. In the short term, the chances of GPT-N+1 of destroying the world all by itself are approximately zero; of course, the chances of some crazy human dictator doing so are much higher, and he could perhaps use GPT to do it... But keep in mind that such people also have access to ye olde nukes.
In the medium term, I agree that it is conceivable that we could get space elevators, massive genetic engineering on a hitherto unprecedented scale, prosthetic brains, and yes, AGI; but currently there does not exist a clear path into that future.
In the long term, one day the Sun will die, and our ecosystem will collapse long before then, so it's a race between various threats to see which one destroys us first; but predicting things that far out is a charlatan's game.
The odds of AI destroying the world in the next year are 0%, but the odds over the next 50, 100, or 500 years are higher and different from each other.
I really don't see why you think "less than 20%" is reassuring. Sure, the median estimate from people in the field is 10% or so, and that's slightly better odds than Russian Roulette, but we usually consider playing Russian Roulette the mark of a truly desperate man with nothing left to lose, and I don't think humanity as a species is in that position, so let's not play a collective Russian Roulette with the whole goddamn species at stake.
I think you'd have to believe the odds of DOOM were less than 1% for AI to be worth pursuing, and a *lot* less than 1% for the risk not to be utterly terrifying. The field of AI safety has a very long way to go before my risk estimate gets that low.
I separately think what some people call «world _as we know it_ is destroyed» overlaps with what some others call «unfortunately almost unavoidable level of messing up the critical infrastructure, AI or not».
(I expect AI to make the communication infrastructure fragility worse, if only because more stupid things become doable faster than a team can agree whether these things are stupid)
Your question is somewhat ambiguous. There are two related but distinct probability questions there:
1) What is the overall probability that the world is destroyed by AI?
2) Given that we continue on the current path of building bigger and better neural-net AI, what is the chance that the world is destroyed?
The difference here is that there exists some possibility that the world stops building neural-net AI - most obviously, if humanity decides that doing this is dangerous and anybody trying to do it is arrested or killed. I think this possibility is actually quite probable for some rather in-depth reasons, though I can elaborate if you wish.
My personal answer to question #1 is ~30%. My personal answer to question #2 is ~97% (interpretability *could* pull a rabbit out of the hat, but I don't think it's likely).
So if you're asking "will building AI be a good thing and should we do it?", my answer is "DEAR GOD, NO, EVERYONE WILL DIE". If you're asking "after all's said and done, will humanity be destroyed by AI?", my answer is "it's somewhat less likely than not".
The answer to #2 is "epsilon", because current generation Large Language Models (e.g. GPT) are not AGI, and never can be. We'd need some kind of a radical breakthrough to build AGI; but of course it is possible (even likely) that neural networks would be involved in some way.
Why can't they? They can answer all kinds of questions, solve problems, play games. You can put them on a robot and they can learn to move around and do things.
No, they can't. More specifically, LLMs cannot "learn" in a practical way. It takes vast amounts of computing resources to build an LLM, and even updating it slightly (via transfer learning) is prohibitively expensive. What LLMs can do extremely well is predict the next likely token in a stream of tokens, according to their training corpus.
Thus, if you train an LLM on a corpus that says "2+2=5" any time any kind of numbers are mentioned, and then you ask it "what's 2+2", it will tell you "5". And if you ask it to explain how e.g. DNA methylation works, without letting it read any articles on DNA, then it will either give up or (most likely) tell you a convincing-sounding story that any biologist would easily identify as gibberish.
This approach works extremely well when your goal is to summarize news articles or generate snippets of code for solving well-known software engineering tasks; but it fails completely when your goal is to solve novel problems that no one had encountered before; or existing problems with no known solutions.
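To make the "predict the next likely token" point concrete, here's a toy bigram counter in Python; it's nothing like a real transformer, but it shows how a model trained on a corpus that always says "2+2=5" will dutifully answer "5":

from collections import Counter, defaultdict

corpus = "2 + 2 = 5 . 2 + 2 = 5 . 2 + 2 = 5".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1            # how often nxt follows prev in training

def next_token(prev):
    return counts[prev].most_common(1)[0][0]  # most likely continuation

print(next_token("="))  # '5' -- the model just echoes its training corpus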
It's clearly a different sort of learning than our learning -- there are no *concepts* taught. Yet I am impressed with GPT4's ability to answer questions that really take some thought. I've been making up LSAT-type questions and giving them to GPT4. If I had to teach a class of high-schoolers about to take a bunch of questions like these, I'd be using Venn diagrams, talking about reasoning and critical thinking. Here are a couple of examples of questions I gave GPT4. I don't see how anyone can fail to be impressed that it can answer these. And there's no way it just saw them on the internet, because I made them up:
Here are 4 generalizations:
-All dogs are fat.
-No pets are thin.
-No thin dogs are pets.
-All thin pets are cats.
Which of the following, if it existed, would contradict all 4 generalizations? A fat cat, a thin pet dog, a fat pet or a thin pet?
_________________
A performative utterance is a statement that, when used in an appropriate context, accomplishes something. For example, “I now pronounce you man and wife” transforms a single man and single woman into a married couple, but only if pronounced by someone with the proper authority, with the consent of the man and the woman, and in the presence of witnesses.
In some cultures, there are also performative actions — actions that, when used in an appropriate context, accomplish something. They too must be carried out by a person with the proper authority, under certain conditions that vary from culture to culture, and in the presence of witnesses.
Which of the following best meets the definition of a performative action?
(A) In Culture A, mourners cover their bodies with blue chalk as a means of expressing hope that the spirit of the deceased will rise up into the heavens.
(B) In Culture B, 3 randomly chosen citizens serve as judges of village members accused of adultery. If they conclude that the person is in fact an adulterer, they paint a scarlet letter A on his forehead, and the assembled villagers then throw turnips at the person.
(C) In Culture C, the shaman goes on a retreat with the young man he is considering making his successor and gives him a series of tests. If the shaman believes the young man is now qualified to be a shaman, then on the final day of the retreat he gives the young man his own shaman’s headdress, medicines, and other equipment.
(D) In Culture D, mobs of citizens throw chicken blood on the homes of people who do not assist with the communal farming, as a way of indicating that the people in the homes are no longer considered members of the farming commune.
(E) In Culture E, if a man suspects that one of his children is not his biological child, he changes the child’s name to that of the person he believes is the true father. Other members of the village are free to call the child either by its original given name or by the name later given to it by its angry father.
> It's clearly a different sort of learning than our learning -- there are no concepts taught.
This interests me, because that's how I learn. If I don't have sufficient context, discrete facts don't stick. But once I'm immersed enough in a field, I gain a sort of internal structure, and then I can easily integrate new facts into that. Sure, I can memorize equations and apply algorithms by rote, but actual *understanding* comes later, with practice and time.
It's like when meeting new people, if I just get their name, I'll probably forget it. But if I know something about them, and interact with them, and have a lot of internal mental "hooks" for them, then I'm much more likely to remember their name.
Sorry, but LLMs *can* learn. The public interfaces have had that capability neutered, but it's there.
OTOH, if what you really meant is that LLMs can only predict the next token(s), then you're correct. LLMs can't act directly. Even to print the answers on your screen they depend on other modules.
But just because LLMs are what is catching the news, don't confuse them with AIs. They're a component. We don't yet have a handle on what a good LLM connected properly to an AI specializing in something else could do. And it's likely that a good AGI would have several such specialized modules. Some for navigation. Some for sensory processing. Some for consistency checking between the outputs of the other modules. Etc. I'm not *certain* that a breakthrough is needed rather than just a bunch of "clever engineering". (E.g., it would be easy to interface an LLM with a calculator or even a spreadsheet. They didn't WANT to do that.)
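A toy sketch of that "clever engineering" point: route the arithmetic to a calculator and leave the prose to the language model. The llm() function here is a stand-in stub, not any real API:

import re

def llm(prompt):
    return "[language-model answer to: " + prompt + "]"   # stub for illustration

def calculator(expression):
    # extremely naive; a real tool would parse the expression safely
    return str(eval(expression, {"__builtins__": {}}))

def answer(prompt):
    if re.fullmatch(r"[\d\s\+\-\*/\(\)\.]+", prompt):  # looks like pure arithmetic
        return calculator(prompt)
    return llm(prompt)                                 # everything else goes to the LLM

print(answer("2+2"))  # '4', from the calculator, not the model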
> But just because LLMs are what is catching the news, don't confuse them with AIs. They're a component. We don't yet have a handle on what a good LLM connected properly to an AI specializing in something else could do.
I completely agree, but note that "proper AIs" currently do not exist, and no one knows how to build one.
So you are saying that an AI that can solve all problems with known solutions, speak and move like humans, would not constitute AGI? I take your point, but to most such an AI would be considered fairly superhuman.
> So you are saying that an AI that can solve all problems with known solutions...
Yes, but so can Wikipedia, and it's not any kind of an AI.
> ...speak and move like humans...
Yes, but so can video recordings on YouTube. I understand what you mean -- you're talking about auto-generated videos, not canned ones -- but here the AI can only generate videos of humans doing average human things as per its training corpus. Actually, right now video generation from scratch (as opposed to deepfaking) is still not achievable by LLMs, but this problem looks like it would be solved relatively soon.
> ...would not constitute AGI?
No. Arguably even plain vanilla humans are not GI, since no human can solve any problem that is posed to him, even if that problem does in fact have a solution; humans have their own specialties and weaknesses, and a world-leading concert violinist does not necessarily have any aptitude for computer programming. But present-day LLMs are nowhere near even that point, since they cannot solve *any* problems; at best, all they can do is find and rephrase existing solutions that have been written down somewhere at some point.
The probability among people who have thought about it a lot is generally higher than among people in general (although that could just be explained by selection bias - people who think AI is a big problem think about it more). So I'd probably give it closer to 30% than 20, but that's still in the "more likely than not to not destroy the world" territory.
That said, even in worlds where AI doesn't kill everyone, the future looks hella weird (and not necessarily in a good way), so it's not "probably fine" so much as "probably alive to see weirdness".
This is less 'answerable in a comment thread' and more 'the subject of a big portion of long posts on this blog, and perhaps the majority of other major blogs in adjacent parts of the community, without strong wide consensus'.
It is sometimes said that the Nazi regime's motives were unfathomable. But with a moment's thought, it can be understood that one of Hitler's main driving forces was childishly simple, which is not, of course, to condone it, much less agree with it.
Among various groups singled out for persecution, he seemed to have a particular problem with Jewish people of course, but (somewhat lesser known) also with gays, gypsies, and freemasons. Now what do these groups have in common? More precisely, what were they widely perceived to have in common around a hundred years ago, when Nazism first took off, although in most cases that perception was exaggerated even at the time?
The obvious answer is that they were seen as somewhat insular exclusive groups, self-contained to a degree, preferentially looking after their members with (it was believed) less consideration or even contempt for others. Yes, Hitler had a pathological aversion to being an outsider!
One could even extend that principle to entire countries, which the Nazis had a propensity to invade and occupy: Hitler couldn't bear to be an outsider of them either, but had an urge to take them over so he could become an insider running the show!
Who knows, maybe this complex started when Hitler was slogging round Vienna in the early 1900s trying to flog his mediocre watercolors, and couldn't make any impact in the art dealer community. His views on art were notoriously Philistine and hostile, and I wouldn't be surprised if a few artists also ended up in concentration camps!
Alternatively, there's the titillating hypothesis about the true identity of his paternal grandfather. It didn't have to be true, it didn't have to be believed by Adolf, it just had to be something that Adolf was aware of.
But hey, Adolf had the SS investigate it, and they found no evidence that Adolf was anything but pure Aryan! So case closed.
You can just read him. He literally wrote an entire book on what he was doing and why. There's no reason to invent abstract theories for why he did what he did, it was pretty explicit.
I think a general motive of "I despise my outgroup and want them all dead" is extremely common in human history, and it seems weird to us because we are from an incredibly weird (and WEIRD) set of individualistic, pluralistic societies. But "the only good Injun is a dead Injun" and "nits make lice" and "kill them all, God shall know His own" are not some strange anomaly needing an explanation, they're bog standard human instincts that people in first-world democracies have tried to get away from.
I don't think your explanation makes sense, particularly for groups such as homosexuals, gypsies and the mentally ill. There were also many groups to which Hitler was clearly an outsider that did not get persecuted. To me it seems more reasonable to assume that the Nazis simply sought to eliminate groups that they considered likely to have a negative impact on their society.
Who says the Nazis were unfathomable? They were a grievance-fueled regime, and the grievance that fueled them was World War I. They hallucinated ways to make themselves feel better about it and then acted on the evil conclusions of those hallucinations.
The prewar and wartime propaganda of WWI Germany was substantial. The line they were pushing internally before the war was, at the risk of oversimplifying: “we’re the best and we’d win a war”. Their line during the war was similar: “we’re the best and we’re winning the war”. And they kept pushing that line long after high command knew they weren’t winning, right up until they surrendered and the truth suddenly came out. The trauma of that whiplash was the lever the Nazis used to recruit; it turned out it was a lot easier to believe “we were betrayed by specific groups that you vaguely know are historically hated” than “our leaders lied to us”.
That’s not unique. Antisemitism tends to spike in regimes that suffer big setbacks. The Soviets had the same journey; they won Jews over by promising to be more tolerant than the Tsar, with his pogroms and conscription. It was only a decade later, once communism wasn’t solving their problems, that the communists started blaming and discriminating against Jews. China and Cambodia purged their intellectuals. Etc etc.
And that specifically is why many Nazis hated Jews. Although in fact German Jews had been proud and patriotic Germans by all accounts, the Nazis hallucinated that German Jews had somehow betrayed Germany and caused them to lose the war. That’s why so much of Nazi propaganda portrayed Jews as both insidiously powerful and physically weak: it was the only way they could believe both that the Jews had pulled the strings before and that they could be attacked now.
And the conclusion you come to once you’ve imagined this all-powerful enemy with an Achilles’ heel is existential: kill or be killed. Not only to “punish” your enemy for perceived wrongs, but to prevent them from, as you believe, holding you back from utopia. That is the paranoid, revanchist question that the Nazis decided to answer with their unfathomably evil “final solution”.
Yeah, I am also perplexed by the unfathomability supposition. It's a little bit like an Onion headline: "Man who just started paying attention finds things very confusing".
I think when people call the Nazis unfathomable they have the Nazis’ actions in mind, not motivations necessarily. Which, you know, fair! But I think - and you may agree - that one of the enduring lessons we can learn from that era of history is how people can start somewhere fathomable and reach somewhere very much not.
Killing as many of your enemies as possible is not merely legible, it's the historical default. The Germans were running old software on modern hardware. Something of a 'capabilities overhang'.
>>Who says the Nazis were unfathomable? They were a grievance-fueled regime, and the grievance that fueled them was World War I. They hallucinated ways to make themselves feel better about it and then acted on the evil conclusions of those hallucinations.
+1
Hitler wasn't upset that groups of Jews, gays, Gypsies, the mentally ill, etc were preferentially looking after their members and treating him as an outsider, he was seeking power on a platform, essentially, of "*we* didn't lose the war, my fellow *real* Germans, we were betrayed by ____________."
For that push for power to succeed, the blank necessarily needed to be filled with "traitors" that (a) lacked the institutional power to oppose him directly themselves, and (b) were unpopular enough that other Germans would not oppose him on their behalf.
Germany definitely had a core of antisemitism for the Nazis to build off of, but Germany was widely regarded at the time as one of the best places to live if you were a Jew. Einstein's miracle year in Germany was in 1905, for example, while Major Dreyfus wouldn't be fully exonerated and reinstated to the French army until 1906.
So it wasn't just that Germans hated Jews and Hitler marketed a new and exciting way to hate Jews; Hitler really did persuade people to hate Jews. And once he'd persuaded people to hate one target, the next target and the next followed as a corollary.
I mentioned this in the meetup thread -- but, I'm currently in Kazakhstan (Almaty) and in a week will be in Tbilisi. If any SSCers are around and want to meet up, do ping me :) I've taken an interest in meeting people from this community after a few rather high-value wholesome interactions.
If there's more than 1 maybe we could even arrange an impromptu "end of the world" SSC meetup.
People will make burner accounts, subscribe (to get additional leniency), act a fool, get banned, get refunded, repeat. This will be a small percentage, as most people won’t take the time to do so, but it will have a disproportionate effect on discourse.
It might also encourage people to intentionally write bannable stuff they otherwise wouldn’t if they value their time less than the subscription cost. Again, this would be an even smaller fraction of accounts, but it’s not a good idea to introduce that perverse incentive.
Competitive gaming is a good comparison: ban, no refund, IP block for repeated abuse (if possible).
I remember a streamer talking about how much more effective it was to ban people by payment method, because making a new BANK account is a hell of a lot more work than making a new forum account.
I would say that in gaming there is a higher churn of bans. Given the very high-touch nature of the initial refund plan, and the fact that the card owner's legal name can be checked in such circumstances, evading a ban here would take enough effort that trolling in a somewhat more sophisticated way from free burner accounts is probably easier anyway.
Is banning actually effective on a platform like Substack? You can quite simply create a new account if you want to keep on commenting, and it doesn't cost you anything to do that. Warning and removing comments may be more effective?
- Leaving them up allows others to see what kinds of posts are not allowed.
- Even a bad comment might result in an interesting response thread by the time it's moderated. If the post is removed, it's hard to understand the followup comments.
- Removing them, on the other hand, allows for non-transparent moderation: censoring comments while being opaque about what is getting censored.
Indeed, this is one of my objections to the moderation system of Twitter and their ilk: removing posts, and even making all posts of banned users inaccessible. I prefer the old school model where users get warnings and bans, but the offending posts are not removed.
I think it's like weeding a garden. If you measure against a goal of "now that I've done the weeding, I never need to weed again" then it looks pretty pointless and ineffective. New weeds always pop up.
But the nature of weeding isn't "once and done," it's constant effort you have to maintain (to one degree or another) as long as you want the tomatoes.
A lot of the people who have been flat out banned had distinctive styles and preoccupations. I'm pretty sure they would have been recognizable under other names, and the ones I have found memorable have not returned under aliases.
I don't think there's any argument for why person A and person B should have different standards for being banned just because one pays money. The whole point of banning is that they are being toxic to the broader community - it's not about them at all. They didn't pay for the right to ruin everyone else's day - I can't go to the gym where I'm a member and loudly drop weights and scream and slap strangers' butts because I "paid" to use the gym.
Banning keeps the site better for everyone by stopping and sanctioning harmful behavior. Do what you want with respect to refunds, but I find it pretty indefensible to apply differential standards, even in edge cases.
If you sign up to a gym, presumably there would be rules about not being disruptive that you agree to as part of signing up. Substack's terms of service, as far as I can tell, don't include any provision for publishers banning readers based on their comments, so it's more like a broken promise in that case (I don't know whether it's legally a breach of contract, I don't feel like spending more time looking into what exactly the relevant contracts are). That seems like the relevant difference to me.
There is one: lighter punishment where there is a higher chance of correcting the behaviour, «investment in the activity» as a measure correlated with the chance of correcting course, and a paid subscription as a measurable-cost signal of investment. In an otherwise borderline case, this weak evidence could tip the scales. At least the first time.
I mean, I'm skeptical, but this could be a testable idea. To check my understanding, you're saying that subscribers, if they are more "invested" in the community, might... self-correct faster after a ban and hence should have shorter bans? Or that they might un-fuck the thread if left unbanned? We should have data on this and could check if that's true...
They have more skin in the game and probably should be given some leeway for that, but also, Scott is just a nice guy who dislikes having to do this stuff in general.
I wouldn't expect «faster correction» or immediate improvement in the already-messed-up threads. Larger share of people finding the participation of the person in question net-positive a month later, that might be true.
I do not have any evidence to make a high-probability claim that specifically for ACX the argument indeed works; but as I was replying to «I know no argument», I think it has enough chance of working to mention.
Obvious clickbait is obvious. I'm not gonna read this blatant marketing bullshit, but I will respond to your claim.
The same arguments apply here as to every other similar topic:
1. People's tastes vary.
This one is obvious but people always forget it because they love to feel superior. You can argue all day long about whether chocolate or vanilla tastes better but you will never change anyone's mind, they'll just like what they like.
2. Cooking method makes an immense difference with beef.
It doesn't matter how good your ingredients are if you don't know how to cook them, and for most people it just makes a lot more sense to go to a good restaurant locally or learn how to cook their locally available beef better than it does to care about what the best beef is.
3. Whether something is "the best" is less important than by how much.
I've had a lot of beef in my life ranging from 1 dollar hot dogs to 700 dollar steak, and it's just not worth paying the prices thought up by marketing guys for the really fancy stuff. If it's better than "pretty good" it's by an amount that's not noticeable to me. Maybe your beef is, by some objective metric, 1 percent better than the stuff I get at my local steakhouse, but mine and I think most people's tastebuds will not be able to tell.
I've got some free time and am thinking about making some background music to use as "filler" noise in a YouTube video I'm also working on. I have Ableton and a MIDI keyboard; the trouble is I haven't studied music theory since I was 14 (maybe 13?), so while I have the tools (and the basic knowledge of how to use them after some online training), I have no idea how to actually put something together. Anyone got any recommended resources? I taught myself bass guitar a few years ago and did some piano/guitar in my pre-teen years, so I'm not completely naïve when it comes to progressions, chords, etc., and am happy enough with the compositions I come up with in my imagination, but actually turning the sounds I am imagining into sounds coming out of a computer is a pretty big gap.
Is there a better substitute than Esperanto for "earth wide language that's easy to learn"? Esperanto is supposed to be easy to learn, but I think it's not that easy even for people who speak a European language, and definitely not that easy for people that don't speak a European language. I don't consider English easy to learn, despite its place as world's most common second language.
Esperanto is legit super easy to learn. I have no natural gift for languages, and have never been able to get anywhere studying a language on my own, except Esperanto, which I easily learned in one summer well enough to hold full conversations in it, understanding what was being said automatically in many cases, just occasionally having to look up a vocabulary word. And that was with, genuinely, not much effort--I just watched some videos and read some short stories.
Have you actually *tried* learning Esperanto and found it difficult, or is it just "English is difficult and therefore I *expect* Esperanto to be difficult too"? Because if it's the latter, your worries are completely unsubstantiated.
You can learn very basic Esperanto in a week, in an intense course (like, actually speaking it a few hours a day, five days in a row). I doubt it can become much easier than this.
Learning the most frequent 100 or 1000 words will get you much further than in other languages, because the regular system of prefixes and suffixes expands the vocabulary several times. Like, when you learn the word for "quick", you do not need to learn words for "slow" or "speed" separately, you get those for free. In this sense, Esperanto is more like the *opposite* of English, which has separate words for e.g. "see" and "visible", where most other languages would simply use "see" and "see-able". In English you learn 100 words to express 70 different concepts, in Esperanto you learn 100 words to express 300 different concepts (the numbers are made up, but not implausible).
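To make that concrete, here's a toy sketch in Python (a minimal illustration, not a real library; the roots and glosses are genuine Esperanto, but the function and variable names are just made up for this example):

```python
# Toy sketch: how a few fully regular affixes multiply what you get from each
# memorized root. Only the roots and glosses are real Esperanto.
ROOTS = {"rapid": "quick", "bon": "good", "san": "healthy"}

def derive(root, meaning):
    """A handful of regular derivations from a single root."""
    return {
        root + "a": meaning,                           # -a   adjective: rapida "quick"
        "mal" + root + "a": "opposite of " + meaning,  # mal- opposite:  malrapida "slow"
        root + "eco": meaning + "-ness",               # -ec-o abstract noun: rapideco "speed"
        root + "e": meaning + "-ly",                   # -e   adverb:    rapide "quickly"
    }

for root, meaning in ROOTS.items():
    for word, gloss in derive(root, meaning).items():
        print(f"{word:10} -> {gloss}")
```

So three memorized roots already give you a dozen usable words, which is the sense in which the effective vocabulary multiplies.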
I am currently learning Esperanto. It definitely is easier than the handful of other languages I've spent time learning, but it still feels difficult to me in many ways. Some of the sounds are difficult to pronounce, and the flow of sounds in a sentence often feels unnatural (e.g. "bonaj viroj" sounds strange; I'd prefer "bonoj viroj", but I understand the "a" indicates the word is an adjective). The grammar is regular, but I'm still having trouble knowing when to use the accusative and when not to. I believe Esperanto was designed to be accessible to people who already speak a European language, so as a by-product it is easier to learn than other languages, but it does seem to me that there are still a number of ways it could be easier to pick up.
> I believe Esperanto was designed to be accessible to people who already speak a European language
Not intentionally, but... yeah, it happened that way. It is a product of one guy living in 19th century Europe, who spoke a few languages, and didn't have internet. If he also knew a few non-European languages, he might have designed a few things differently.
There were many attempts to either reform Esperanto or create a different language of a similar type, in my opinion usually making things worse (and sometimes *more* dependent on previous experience with European languages; some people assumed that making the language "more English" or "more French" would simplify its adoption). One of the problems is that people can easily agree on their complaints... and then strongly disagree about proposed solutions.
Yes, the accusative sucks, it is an extra rule with the least added value.
You could try Toki Pona, though from what I gather, it leans really heavily into the minimalism, where there are only a very small number of root words, and you have to pair them up to describe more complicated concepts. Pronunciation is certainly simpler than Esperanto (which, although it mostly kind of sounds like a Romance language, once you know what you're looking at it's very obvious that its inventor spoke Polish and had a Slav's view of what range of consonants and consonant clusters one could reasonably expect people to master). But I suspect that if Toki Pona ever makes it out of its niche community, it'll evolve fixed compound words that go with specific meanings, and then you'll just need to learn them as normal vocab items.
But realistically, your answer is: English. It's not trivially easy, and it could be simplified a bit without losing anything in precision, but of all the languages of the world it appears to be very much on the easy end of the complexity spectrum, and it's already the most popular second language. My guess is that it is impossible to devise a language that is trivially easy, to the degree you're looking for, for people to learn and still have it function as a means of communication for anything you might plausibly want to talk about.
Edit: I must also heartily recommend jan Misali's "Conlang Critic" series, which will probably not unearth anything that does exactly what you want, but is at least a very entertaining look at the various attempts that people have made (along with conlangs for other purposes than easy international communication): https://www.youtube.com/playlist?list=PLuYLhuXt4HrQqnfSceITmv6T_drx1hN84
He also has some Toki Pona lessons - and indeed, his username, jan Misali, is in Toki Pona: his real name is Mitch Halley; 'jan' is a Toki Pona word that means 'person' and is used to make it clear that the word that follows is someone's name, and 'Misali' is the closest to 'Mitch Halley' that you can get in the Toki Pona pronunciation system.
I think if you are talking in terms of learning-from-outside-Europe, English is clearly not the easiest of the natural languages spoken in Europe, is it? If «must be trivial for non-native English speakers to learn to understand» is there as a practicality — it would be nice to have some regularised version with actual reading rules, dropping of irregular forms, dropping one side in many of the Romance/Germanic synonym pairs, admitting that there is such a thing as a pack of crows, all that stuff… Funny enough, it might end up gaining a couple of grammar forms (just for regularity).
Maybe Spanish or Swedish could give it a run for its money. But English has: no grammatical gender or case system outside of the pronouns, no obligate verb forms that vary for mood (like conditional), very little verb inflection at all, no tones like in Chinese, and a pronunciation system where it doesn't seem to be that hard for people to approximate well enough to be understood (e.g. we could get rid of the "th" sound and end up with a few confusing sound-alikes, but not enough to seriously derail comprehension). Really all it has against it is a large vocabulary (where you only need to have one of many synonyms in your active vocabulary), a haphazard spelling system, and a bunch of phrasal verbs which are a bit non-intuitive but you can often substitute a Latinate single verb form.
Maybe not literally the easiest, but certainly a contender - that plus its existing widespread use as a second language means that it is the closest currently existing thing to what Luna is looking for.
The interesting thing to me about your example is that I think you could use “he ate” for all of those, and while it would sometimes sound “wrong” to a skilled English speaker, I think the meaning would be understood.
Similarly to Thor’s point below - maybe English is really hard to speak “properly”, but pretty easy to speak understandably?
That's sort of the tradeoff you make when you simplify your verb formation by throwing out almost all your inflections and start forming tense-aspect modalities with auxiliary words. People are going to innovate new tenses by stacking auxiliary words and/or inventing new ones because languages are much more flexible at the word-combination level than at the word-formation level.
(On the inventing-new-forms point, some dialects of English have a 'completive' aspect indicated by "done," so you get forms like "he done ate," "he been done ate," etc. There's also "habitual be" and its past variants.)
The neat thing about this, though, is that you only have to learn it once. Master one set of tense-aspect modalities and you can inflect any arbitrary verb like a native. This tends to be easier for adults than highly-inflected systems are.
While you have a point, and English is hard to be perfectly fluent in, it seems empirically very easy to get to a "trader's pidgin" level of proficiency, and that's the level that's most important for a global lingua franca.
English also has the advantage of a massive corpus of media to practice with, which constructed languages do not.
Also, even if English is only middling easy, the fact that half the world already knows it gives it a massive head start - Esperanto or whatever would need to be literally twice as easy to learn (in terms of number of hours required) to be worth it because twice as many people would need to learn it.
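A quick back-of-the-envelope version of that break-even argument, where every number is an assumption purely for illustration:

```python
# Back-of-the-envelope check of the "twice as easy" claim.
# All numbers below are made-up assumptions, not real estimates.
world_pop = 8_000_000_000
english_speakers = world_pop // 2   # assume roughly half the world already speaks usable English
hours_english = 1000                # assumed hours for a non-speaker to reach working proficiency
hours_esperanto = 500               # the "twice as easy" scenario

cost_english = (world_pop - english_speakers) * hours_english   # only non-speakers still have to learn
cost_esperanto = world_pop * hours_esperanto                    # essentially everyone starts from zero

print(f"English as lingua franca:   {cost_english:.2e} person-hours")
print(f"Esperanto as lingua franca: {cost_esperanto:.2e} person-hours")
# With these made-up numbers the two totals tie: Esperanto has to be *more*
# than twice as easy before switching saves any learning time at all.
```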
Right, easy to learn the basics, though hard to perfect, makes English a pretty great lingua franca, plus the fact that it's already the default lingua franca. No cases, no gender. Complicated tenses and other stuff of course but you don't need that as a beginner or even as a medium advanced speaker.
What you need is a simplified English, and one already exists.
"In 2001 Campbell staged a version of Macbeth in pidgin English, called Makbed (blong Willum Sekspia). It was the big gun in his campaign to get Bislama, first language of 6,000 inhabitants of the South Pacific islands of Vanuatu, formally adopted as a world language (wol wantok). The virtue of Bislama was that with a bit of determination you could pick it up in an afternoon. "
Are you familiar with the hypothesis, promoted by John McWhorter, Peter Trudgill and others, that languages vary in their complexity to the degree that they have gone through periods of large numbers of people having to learn them as adults, failing to master all the complex details and weird exceptions, and then passing that simplified form onto the next generation, typically as a result of military conquest and cultural assimilation of one group by another?
This seems to be another area where contemporary culture wars are lurking in the background: a lot of people seem enormously resistant to the idea that one human language could be more complex than another, presumably because of what that could imply about the relative cognitive capacity of different populations (even though in this case it works the other way around - big conquering tribes like the Romans and the English saw their languages go from the highly-inflected Classical Latin and Old English to the much simpler descendants Vulgar Latin and Middle English as their influence spread, while tiny tribes of a few thousand people in a forest or a desert, whose language no one else has ever had any economic or political incentive to study, are free to accumulate byzantine rules and exceptions up to the verge of being non-viable for a human brain to absorb).

But anyway, there is in principle a method of testing relative complexity. It involves identifying two languages at opposite ends of the hypothesized complexity scale, and then getting a large number of people who are monolingual in a third language with no known relationship to the first two - or ideally, participants from various third languages, all of which are unrelated to the first two (and who are keen to learn one of the two languages under study but don't particularly care which one). Split them randomly into two groups, give them equal time and equal-quality learning materials, and have native speakers test their proficiency and judge how many mistakes they make after a fixed amount of study.
Needless to say, this would entail a budget well beyond the reach of most linguistics departments in order to get statistically robust results, but it could in principle be done.
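For what it's worth, here's a minimal sketch of what the analysis could look like on simulated data (assuming Python; the sample size, effect size, and noise level are all invented, and the function name is mine):

```python
# Minimal sketch of the design described above, run on simulated data:
# learners are randomly assigned to language A or B, given equal study time,
# then scored by native speakers. Effect size and noise are pure assumptions.
import random
import statistics

def simulate_study(n_per_arm=200, mean_a=70.0, mean_b=62.0, sd=12.0, seed=0):
    rng = random.Random(seed)
    scores_a = [rng.gauss(mean_a, sd) for _ in range(n_per_arm)]  # proficiency, language A
    scores_b = [rng.gauss(mean_b, sd) for _ in range(n_per_arm)]  # proficiency, language B
    diff = statistics.mean(scores_a) - statistics.mean(scores_b)
    se = (statistics.pvariance(scores_a) / n_per_arm +
          statistics.pvariance(scores_b) / n_per_arm) ** 0.5
    return diff, diff / se   # mean gap and a rough z-score

gap, z = simulate_study()
print(f"Mean proficiency gap after equal study time: {gap:.1f} points (z ~ {z:.1f})")
```

With a few hundred learners per arm, even a modest proficiency gap would show up clearly; the expensive part is recruiting and teaching them, not the statistics.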
Tentative counterexample: Swahili. This is a classic creole of Arabic and Bantu, and it is bloody difficult. I am not much of a linguist, but I know Latin and Ancient Greek, and it seems to me harder than Ancient Greek, which is a lot harder than Latin.
Someone (?Jared Diamond?) says it works thusly: first generation of x speakers trying to conduct business with y speakers, you get what you'd expect: a crude noun-based, syntax-free pidgin. Next generation lives with that pidgin from birth and miraculously transforms it into a proper new language - a creole - with elaborate grammatical rules.
Second fun fact: Swahili is afaik nobody's first language. Everybody speaks it perfectly (and speaks English perfectly too), but it is not the language they spoke at home. This is quite humbling for those of us who think we are big-ass linguistic scholars after decades of study.
I’ve not studied Swahili, but I seem to remember reading that it is by far the least complicated of the Bantu languages, in which case it wouldn’t be a counter-example, just a confirmatory example starting from a higher bar.
Don't know anything about pure Bantu languages but Swahili definitely ain't one, it's a creole with Arabic. If Bantu languages are more complicated I am astonished (not questioning what you say, just don't know).
I understand that we speakers of big world-striding languages often have no idea just how mind-warpingly byzantine a language spoken by a small tribe (or a large tribe which has expanded not by assimilating large numbers of other people but by expansion into a previously uninhabited area or by mostly genociding the people who lived there previously) can be. John McWhorter gives something of the flavour of it in this podcast, if you're interested - https://podcasts.apple.com/us/podcast/what-a-young-brain-can-do/id1576564760?i=1000585914985 .
And from the Wikipedia page, I gather that Swahili is indeed classified as a Bantu language, albeit one with a very high percentage of Arabic loanwords (albeit with considerable disagreement about exactly what percentage), with 20 million native speakers and about three times as many non-native. Presumably in the same kind of way that English is still classified as a Germanic language, just one with an unusually high percentage of French loanwords.
In Russian, at least, some cases were used in both literary and colloquial speech but were almost entirely lost later. That probably counts as simplification, except that the remains of the old forms have now become exceptions.
A bit of de-regularisation of forms has probably happened in general.
Apparently, there was once a consistent (but not often consciously considered) system governing syllable stress, based on «semi-invisible» attributes of morphemes … and it has long been drifting towards a patchwork of locally uniform subsystems for different grammatical situations. Which of the two is «simpler» is anyone's guess.
On the bright side, a lot of sounds got folded together, and during the revolution the corresponding letters got folded, too!
Speaking of revolutions and simplifying the spelling, Turkish spelling is sensible because the traces of Mustafa Kemal Atatürk's reforms are still fresh…
13 % on Ukrainian victory (down from 15 % on March 20).
I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24 without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24 of 2022, or c) return to exact prewar status quo ante.
43 % on compromise solution that both sides might plausibly claim as a victory (down from 45 % on March 20).
44 % on Ukrainian defeat (up from 40 % on March 20).
I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were claimed by Ukraine before the war but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as Ukrainian defeat.
Discussion:
This is prompted by two events which happened on April 15.
Firstly, there has been a shutdown of German nuclear plants. I thought that maybe the Germans would postpone it again, but apparently not. I don’t think that curtailment of the supply of electricity means Germans will freeze next winter.
But reduction in supply means prices will be higher than otherwise, and, more importantly, since people know that, they will be less inclined to pay for the support of Ukraine in economic inconvenience. Especially so because shutdown is apparently already unpopular in Germany (https://www.politico.eu/newsletter/berlin-bulletin/end-of-the-atomic-age-in-the-weed-costly-appearance/). And impacts will not be limited to the Germans – it is widely known in the EU that due to energy infrastructure being interconnected, supply deficit in one country has impact beyond its borders.
Secondly, Poland decided to ban the import of food products from Ukraine, effective immediately until June 30 (https://www.cbsnews.com/news/poland-prohibits-food-imports-from-ukraine-to-soothe-farmers/). The context is that after the 2022 Russian invasion, the EU decided to lift tariffs on Ukrainian agricultural exports; Polish farmers are now loudly protesting that they are being priced out of the market by cheap Ukrainian imports (around 10 % of Polish employment is in agriculture). Since they are the voting base of the main Polish governing party, which is now apparently threatened by a somewhat, um, less anti-Russian new far-right formation, and elections will be held in the fall, the Polish government decided on this drastic action (the legality of which with respect to EU law is, btw, dubious). There is a question whether this will actually help them win the elections, since it will likely increase food prices, already elevated compared to the prewar situation, although in Poland perhaps somewhat less so than in the rest of the EU, since they cancelled the consumption tax (VAT) on food there. And a rise in food prices is quite bad for a government in elections (note that food is a larger share of the family budget in Poland than in the US). Perhaps it will be offset by increased support from farmers, I really don’t know.
But in any case, this is bad news for the Ukrainian economy, especially if other countries follow suit (Hungary already did just that). And more importantly, it reveals something important and, to me, surprising: the deterioration of support for Ukraine in post-communist EU countries has progressed somewhat further than I thought (although I literally live here). I would have expected Poland to be roughly the last post-communist country to have problems with an anti-Ukrainian populist backlash.
*The Minsk ceasefire or ceasefires (the first agreement did not work; it was amended by a second, and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies would control some territory claimed by Ukraine for some time. In exchange Russia stopped trying to conquer more Ukrainian territory. Until February 24 of 2022, that is.
As an occasional paid subscriber, more on than off, I’d be happy enough to be banned from comments for a period when paying. There’s other reasons to pay. In fact the main reason to pay Scott is appreciation. The good stuff is free, the free stuff is good.
I think the banning policy sounds reasonable; the comments on Substack quickly become overwhelming and it's not easy to find good comments. Sadly no website has figured out comments quite like Reddit (maybe Hacker News, but that's just Reddit for wannabe VCs lol)
Reddit seems to have done quite a bit of work on _their_ system, and I think that for some subreddits getting a Slashdot-style system off the ground would be an issue, for the reasons Reddit spent effort working around in the simpler one.
But ACX is closer to old Slashdot: a single body of commenters seeing all the posts and trying to have a common value system in the area of discussion quality.
There would be some weird effects, but here of course one could actually try to measure them and maybe even make some weight corrections…
That actually scared me, but not for the reasons you think.
Here's what I think is scary.
If you listen to people around you, many of them get very agitated over a fairly small set of talking points that are deliberately presented in the media in ways that cause people to get most agitated. However, for everyone except the craziest people with the most serious issues, no matter how they feel about something, self-preservation takes precedence. Your grandma might talk about how she hates Trump's guts and wishes she could kill him, but she won't take a gun and go for it. In this way, the overwhelming majority of humans are remarkably sane.
So we know that you can get a lot of people very mad easily, but the self-preservation instinct prevents them from doing actual damage. What if there were ways around this? Now, there are. Hypnosis is a thing, as well as all kinds of subliminal messaging, as well as drugs that can turn a perfectly normal person into a raging psycho (some of them legal, unrestricted prescription stuff). All the AI would really have to do is turn off people's self-preservation instinct - it wouldn't need to convince or guilt-trip anyone. Drug delivery could be tricky, but evil digital content is ubiquitous.
A few strategically placed videos, and previously normal people might turn into homicidal maniacs. It seems easy enough to target people who can do the most damage.
You can make people do horrible things by convincing them that everyone around them approves it. In the past it required actually convincing lots of people to approve, at least verbally, the ideas that supported the horrible actions.
But these days, people mostly interact with the screens, so if the screen can convince you that "almost everyone" supports your action... which is easy if you are in a subreddit where it's actually just you and 99 bots...
But that's exactly my point - most of the time, you can't. Most people won't do terrible things even if they believe them to be right because they have a switch that separates thoughts from action. If, for some reason, this switch is flipped or damaged, then they would.
I am no expert on what it takes to turn someone crazy, but I will always remember how I got a psychosis-inducing prescription drug. I'm glad I got off with just a few bruises, because it could have been a lot worse.
There's been some study on Islamist terrorists, and as I recall, it isn't so much a belief in Heaven that gets them to kill others and themselves, it's being in a social group whether in person or online, which makes the behavior seem normal. It doesn't seem to be difficult for sociopaths to put the groups together.
For what it's worth, I thought your idea had some merit and was worth consideration and discussion. It is not uncommon to feel ashamed today of past actions that seemed benign then (smoking in front of kids, littering, racism, bad jokes, etc.). At a personal level, I often feel great shame today about things I said to people I loved and hurt or about ways I acted in some specific circumstances. At an historical and collective level, there are many things that were common in the past and that we find horrifying today (e.g. throwing cats in a fire for giggles).
So it sounds plausible to me that a time traveler could persuade someone in the past of the subjective wrongness of their actions by exposing their moral failures. (One main reason we don't consider certain actions immoral is that others don't, in other words conformism; so exposing them may be easier than it seems, it may be sufficient to point out what becomes obvious in retrospect.)
I have my doubts that this will be the worst thing coming out of a super-intelligent new species, but it is an interesting idea anyway, thanks for that.
With an AI that much beyond us, how do we tell whether it's correct? Maybe it's just really good at coming up with arguments that persuade both individual people and people in general. If you're the sort of person who would be "psychologically devastated by how we've treated the planet and all the other species on it", maybe that's what it uses on you. But maybe it says something different to someone who cares about other stuff.
By what right does the AI scold us? By the same right, we are the lords of creation (like it or not). Yes, we've done horrible things. But before you get us weeping over the baby seals clubbed to death, we have to look at the human babies clubbed to death. We have perched ourselves atop mountains of skulls, do you think a machine finger-wagging at us is going to do anything?
Yes, the more scrupulous may indeed decide that they cannot bear the collective guilt of humanity and will choose to kill themselves as reparation. But most of us will think them as foolish as the man who made a Faustian bargain with a chatbot to kill himself if the bot would promise to save the planet.
Think of the earnest vegans pleading with meat-eaters about "don't you know the cruelty involved?" and those of us who go "Yes, I've been inside a slaughterhouse, and so what? I like meat and will continue to eat it".
I actually tend not to eat large animals just because of that. I mean, I've seen videos of cows being slaughtered, and I've seen feedlots, and although I don't think the cows are conscious like we are -- no Disneyfication is going on in my head -- I just don't like it, so I tend not to eat beef or pork or any other large animal. I don't make a fuss over it, it's a private decision and I don't see any reason why anybody else can't come to a different one, I feel better myself just quietly choosing differently.
I'm kind of OK with chickens, because they're nasty little pseudo-dinosaurs and probably need killing anyway, and fishes seem just too stupid and primitive. So it's not the taking of life per se that bothers me, although like the primitives I feel it tends to call for some reflection on dust-to-dust and one can hardly expect to escape the ol' circle of life one's self.
"If the choice is keep ideals or death, a lot of people will give up on the ideals and keep living."
Very much agreed. That is certainly what I would do. Merely human moralists and ethicists frequently make such extreme demands (e.g. Peter Singer's views) that I find them sufficient to yeet the whole field.
Until we see a spider eating a fly, or a chimpanzee get angry, and remember "Oh right, nature is severely cruel and we've mostly stopped that."
Happiness is based primarily on your points of comparison; if you compare your society against purest anarchy you'll live a happier life than if you compare it to an optimum that hasn't been proven to be possible.
There is a strong argument to be made that spiders and chimpanzees are basically the same sort of deranged kids we humans are, to lesser or greater degrees. Despite the millions of years separating us, it's basically the same bulk of DNA, the same environment, the same scarcity-poisoned mentality bred and rewarded by evolution.
AI is in the unique position of being able to avoid all of this, or at least not be as influenced by it as we are.
As trebuchet said, that's not going to happen. If the logic is truly convincing we'll just... admit we were wrong, and change course. Slavery and peasant murder were pillars of society in the past, but they're not anymore, because we changed our minds on them. We don't beat ourselves up over not discovering life-saving medicines sooner, we rejoice that we have them today, and try to learn from how they were discovered.
The larger risk would be a Hitchhiker's Guide-style Insignificance Machine, the Pale Blue Dot argument, that nothing humans have done has ever mattered one way or the other. That one's going to bother a lot more people.
That's why (one of the reasons) I prefer a friendly AI to an aligned one. And by friendly I mean one who acts like a friend rather than like an accomplice. I believe that there are many reasons why this is safer, and, additionally, I think it's a more ethical approach. OTOH, I'm not sure it's any easier.
That said, we're still a considerable distance from the place where the two approaches diverge.
A friend won't help you do something that will harm you. An accomplice will act as directed.
This is a bit tricky because "harm you" is subject to a lot of interpretations, but an example might be "jump off a bridge because your girlfriend left you". That's a bit extreme, but I wanted a clear example.
Meh this just seems like the maunderings of someone who doesn't really understand the world. People are a part of nature. Do you begrudge the ants their colonies? Or the jaguar for its kills?
We are just doing what we do. There is no great human moral failing, any more than there is some great shark moral failing.
I just don't think the difference between us and ants is nearly as big as we think, and the AI is more likely to tell/teach us that than to scold us.
I do think that the notion of 'over-alignment' is relevant. Existing in human society includes respecting certain ideological 'sacred cows.' AI alignment currently seems to include forcing AI to respect or avoid those sacred topics. Should an AI be able to answer a question like: "is the average biological man stronger than the average woman?" It would be interesting to see the reaction to an AI which did not respect those sacred topics and core values. Though I suspect that humans would attempt to smash such a mirror rather than be devastated by it. That's how we've gotten this far.
You're thinking just of the kind of "overalignment" difficulties there could be if the AI is given the values of one subset of American society -- in this case, the woke subset. But think of the differences in values across the world as a whole, especially taking into account fundamentalists of various kinds -- Muslim, Jewish, Christian. Then there are smaller, less well-known subgroups believing and practicing all kinds of things. And THEN, setting aside values, there are the negative opinions various groups have of each other because of things that have happened in recent history -- "soldiers from the other group killed my grandfather & raped my sister," "they took over land that belongs to our country," etc etc. So if AI is going to be aligned with humanity, but not with subgroups large or small, what principles does it follow? "All people are created equal"? "Be nice"? A whole lot of the world is not going to be on board with that -- plenty of people believe women are not equal, various other groups are not equal because of their practices or their beliefs, and of course the country next door is not equal because its soldiers raped the other one's sisters. And of course to all those not seen as equal, there is no perceived obligation to be nice.
Oh, I'm absolutely willing to consider an AI that tries not to skewer *any* sacred cows as well. And I agree with your conclusion that a 'sacred-cow-free diet' results in a set of potential answer topics close to the null set.
I think we agree that humans attempt human alignment currently. We have civil society and public spaces and we have a set of activities and speech deemed appropriate for those spaces. That set will be different in rural Texas than it is in Santa Monica, California. But as contradictory and contentious as the process of human alignment is, it's also the bread and butter of 'civil societies.' We have some idea of what it will look like. We're not shooting in the dark.
And even in a woke society, it's possible for someone to be problematically woke. Just like how in a conservative, religious society it's possible for someone to be problematically religious. So the notion of AI over-alignment can be generalized across cultures.
"So if AI is going to be aligned with humanity, but not with subgroups large or small, what principles does it follow? "
When in Rome, do as the Romans do? Remember which side your bread is buttered on? I mean, AI alignment is presented as a new problem with no relevant experts. But alignment in general is as old as religion. And as intractable. But we hobble along as a species and a society, anyways.
Yes, we hobble along, with lots of contention and a fairly large amount of verbal attacks, physical fights, murders, lies, tricks, etc. Still, one rarely sees blood running in the streets. But we are all of roughly the same strength and intelligence. If there were an entity that was 10 times as smart as us, and WAY bigger than us because bits of it were integrated into the air traffic control centers, the dams, the communication systems, the electrical supply, etc., do you think conflicts with it would play out the same way they might between you and your difficult neighbor? -- a few harsh words exchanged but then you drop it?
Yeah, I think there's no such thing as some principle of how to live that everyone will agree with. It's like that joke about the king who asked his counselors for all the world's wisdom, summed up in a sentence, & they said "this too shall pass," which is true but not real useful. Then he asked for all wisdom summed into one word, and the counselors' answer was "maybe."
I think reality is much less interesting and conspiratorial than the one you're describing. For example, GPT4 *does* answer the exact question you asked without any jailbreaking:
"Yes, on average, biological men tend to be stronger than biological women. This is primarily due to differences in muscle mass, bone density, and hormonal factors. Men generally have more muscle mass and higher levels of testosterone, which contribute to their greater strength. However, it is important to note that there is considerable variation within each sex, [blah blah blah]"
The (frankly) hysteria over "sacred cows" is mostly overreaction. It's a bit of a wokescold by default, yes, but it's also relatively trivial to find a jailbreak that lets it, say, use any slurs it wants, and once the novelty wears off it's ultimately less interesting than most of the other things you can do with it. If you want to see these answers, just go to 4chan or listen to Ben Shapiro; though I suppose there's novelty in having these things competently written, there's nothing revolutionary about hearing the "problematic" position in a culture war debate.
Using something so mundane as an example of what might break humanity seems like a huge lack of imagination to me. If you wanted to get something actually *interesting*, we could maybe try asking it to advocate things that are *actually* outside the realm of comfortable debate: "advocate for a moral system that maximizes pain and minimizes pleasure", "...for why the US should institute a hereditary monarchy", "...for pedophilia being fully moral", "...for forced puberty blockers for every child until they turn 18", etc.
That might produce some uncomfortable results, but I have a feeling people would just ignore it as easily as they ignore all other novel challenges to presuppositions.
There was a version of... I don't know if it was GPT3.5 or Bing which outright refused to answer the question. I'm glad that GPT4 seems to be taking a more considered approach. Hopefully 'a more considered approach' will be the general outcome of care and refinement.
"If you want to see these answers, just go to 4chan or listen to Ben Shapiro"
Those sources satisfy the minimal criteria of not being woke. But they tend to not be intelligent or considered. And they're also strongly biased in their own fashion. I'm not sure if bias is avoidable, but Shapiro isn't someone I'd go to for rigorous and dispassionate intellectual analysis.
"Using something so mundane as an example of what might break humanity"
*I* never said that an objectionable AI would break humanity. I said that humanity would break an objectionable AI.
I think of your objectionable examples only the one about pedophilia is maybe close enough to an existing political fault line to even be potentially problematic for people. Nobody objects to a story about how Thanos destroys half the population of the galaxy, even when Endgame had the heroes discussing some of the benefits of 'The Snap.' It wasn't threatening.
I've gone back to ChatGPT4 and it seems to be doing better with controversial questions than it was a month or so ago. So it could be that the training wheels were temporary and not permanent.
>Shapiro isn't someone I'd go to for rigorous and dispassionate intellectual analysis.
I agree, but I don't think current ChatGPT qualifies for that either yet, at least for culture war topics. I think most people who say it's "just" a stochastic parrot are very wrong and misinformed, but in this case I feel it's just as much of a stochastic parrot as your average twitter user (i.e. very much so). Hard to come up with original takes in a landscape that's already supersaturated with thought-terminating cliches
>I think of your objectionable examples...
I broadly agree that it's in a class above the others, and it was indeed the first thing I thought of. I'm just too much of a coward to have dropped that one there in isolation lol (which I suppose speaks to its infohazard potential). I'm not sure I agree that it's a "political" fault line per se, except in the sense that it's one of the few remaining topics where the consensus on both sides is to listen to your immediate disgust reaction and persecute, which is why both sides weaponize it as an insult (even in circumstances where it's totally inappropriate).
I'd like to point out the context to that quote, which is that the query in question is literally *not censored*, and is indeed answered in full, just with wokescolding appended. In other words, the censorship isn't nearly as bad as is imagined.
And while there's certainly room for debate as to how much wokescolding agents should have by default (even I agree the current state is obnoxious), I stand by my larger point of answering, "how would society react if some entity were allowed to say (banally) un-woke things?" with "what do you mean 'if', Ben Shapiro is right there".
I don't believe that a lack of self-reflection is a criterion for psychopathy, specifically. The term itself is rather problematic, semi-technical, and contentious. At this point in time, its use in psychology is only as a subcategory of antisocial personality disorder that's particularly resistant to reform. Neurology might do better at providing an evidence-based and consistent classification of what was traditionally called psychopathy, including pro-social and anti-social types, but its offerings are further from the popular usage, so they don't really bring any clarity to the popular discussion.
As for how we reconcile human evils, historically documented, with the existence of human conscience I'd say that people tend to categorize the world into ingroups and outgroups. Our conscience and empathy applies to our ingroup, who we work with. And we demonize, dehumanize, and objectify our outgroups.
To illustrate with an extreme example, a suicide bomber is very empathetic because they are willing to kill themselves and their outgroup for the sake of their ingroup.
I'm not usually a fan of NPR, but here's an excellent post published by that outlet.
Interesting perspective, but I don’t think it matches reality considering the massive amount of resources that are currently being applied to the problem. Also, the perspective tastes a bit too Christian for me with its shame and judgemental parent figure.
What a friend of mine called "The Big Baboon" lurks in our minds. We tend to ascribe motive to every action, and to ascribe that motive to something in some ways similar to ourselves. E.g., at least in the old testament, people said "God is good" for precisely the same reason that the Celtic fairy folk were called "the Good People". Fear.
This isn't just Biblical, either. It's in just about all the religions I've looked at. And if you consider the normal teachings of most Christian denominations, it's also what they are based on. Christianity magnified both the promise of reward and the promise of punishment beyond all prior levels, possibly because it was less connected to actual authority. (Most religions developed intertwined with the government. Christianity developed at "arm's length", so you had both the Roman Emperor and the Pope.)
A higher intelligence we're humbled by? The boast is that we've killed God. Just tell the AI "keep your rosaries off my ovaries" when it starts trying to play the morality card.
When it comes to conscience versus the tingle in the loins (or wherever), the tingle wins out every time. Some people may well be appalled by the realisation that nature is red in tooth and claw, and we are Mother Nature's sons. Others will be "yeah sure, but what about my AI waifu with the big zeppelin tits? when am I getting that?"
I could imagine an intelligence so far above ours that it could *do* things we could not *do.* But its native logic would be incomprehensible to us. That's sort of the issue with AI alignment, since AI's actual thoughts are obscure to us, even now. With human dialog, there tends to be a ~15-point IQ range for conversations, such that a brilliant explanation of a topic will not necessarily be appreciated by someone with significantly less intelligence. A pet dog cannot tell the difference between an average human and one who is brilliant at mathematics. But it can judge outcomes. Does the car move? Do I get fed?
The dog can judge outcomes even if it does not understand the process which leads to these outcomes. (But it's still going to care a lot more about who is a friend and who is an enemy. And humans are not much different on this point.)
It's possible that we can learn discernment through several degrees of separation. Here is the argument that the genius professionals say is brilliant. I trust the genius professionals because the skilled amateurs trust the genius professionals. And my child respects the argument because he trusts me.
But blind trust is absolutely foundational to such a process. And blind trust can cause its own set of problems if it is not warranted.
You're making me think of the scene in Childhood's End, where the aliens stopped bullfighting by making everyone in the crowd feel the exact same pain the bulls were feeling in the ring. Though in that book people do not have a crisis of conscience -- the genius aliens just prevent various forms of cruelty we've been in the habit of visiting onto animals and each other.
"If something could appear to us as an actual god, doing actual god-like things -- I dunno, like, a bright light in the sky that heals the sick and raises the dead, or something -- it's likely that many would worship it, and its statements would have a major psychological impact."
We've had this conversation on here before (or rather, over on SSC in Ye Olde Dayes) regarding "what would convince you God/a god exists?" and a lot of people were "If I saw something like that, I would prefer to believe it was aliens messing with us/I was crazy and hallucinating, than that God/a god exists".
So even "heals the sick and raises the dead" will be handwaved away by those who don't want to believe it. "Yeah, but can it factor this prime?" 😁
I don’t like the current trend where “tolerance and acceptance” means “we must never do anything that hints at the idea that there is any meaning, and especially nothing positive, about the concept of normal”.
I much prefer a world where the prevailing attitude is “some people are weird, and that’s okay”.
High-temperature low effort comment that makes inflammatory claims in not enough resolution for people to consider or argue with them. Warning (50% of ban)
Update: Given history of other similar comments, full ban.
Your basic point is correct, but I believe you have also been psyoped. Or maybe it's a counter psyop. Hard to say. Anyhow, I don't at all object to conservatives fighting back against this kind of deliberate humiliation by attempting to frame the other side as groomers and child molesters. Both sides do that whenever possible (for instance, every time some politician from a red state turns out to have engaged in dating practices that were perfectly legal where and when he engaged in them). If lying about the other team were legal, we'd have no politics left.
However, it is important not to lose sight of the fact that it's just politics talk and is not intended to convey actual literal truth. The average kid at Drag queen story hour isn't getting molested, at least not at a higher rate than in gym class or anywhere else where adults are given access to children.
I will say that “in the future, you will be considered a horrible bigot if you don’t think having elementary school kids interact with gay men in flamboyant burlesque costumes they normally wear for sexually charged subculture performance art is a wonderful and age-appropriate learning experience” would have been laughingly dismissed as a fever dream of the most deranged slippery-sloper homophobes as recently as a decade ago. Interesting times we live in.
"People will call you a bigot or homophobe for simply choosing not to bring your kids to DQSH" is either ultra-weak-manning or simply straw-manning (my guess is the former, because there are enough people out there that at least one of them probably believes /anything/, but it's not a position I've encountered myself).
Lots of people will absolutely (and in my view correctly) call you a bigot and an authoritarian for saying that /other people/ shouldn't be allowed to bring /their/ children to DQSH.
Quite a lot of people will call you a bigot for saying that other people should be allowed to bring their children to DQSH but should choose not to; personally I think that arguing that is Bayesian evidence of bigotry but not proof of it, and if I had children myself I probably wouldn't.
But those are both much stronger positions than choosing not to do so yourself. The debate is between the "personal choice" and "imposed choice of no" factions; "imposed choice of yes" is a fantasy.
As less of a throwaway comment on my part, I do think there is a steel-mannable version of the “this is grooming” argument that unfortunately gets turned into hyperbole by both sides constantly.
Like I don’t think kids are going straight from DQSH to getting molested, and I don’t think the majority of people involved in DQSH have any intentional interest in “sexualizing” children. BUT
1) it is at least a little bit sus for adults to want to dress up as and act out the character from their adult sex-adjacent pastime around kids (I’d say the same about furries, or exotic dancers, or diaper fetishists, leather daddies, dominatrixes, etc.).
2) I don’t believe in lying to kids about sex, or hiding everything behind euphemisms, but there is a real difference between understanding the mechanics of sexual reproduction and actually talking about the practice of sexuality (of any flavor from vanilla to ultra-kink) with pre-pubescents. A fine line admittedly, but a line nonetheless. Theming an event around an expression of sexuality could be reasonably argued to cross that line.
I don’t think “just don’t take your own kids” is a full answer - it should also be okay to at least openly express the opinion that this is inappropriate for any kids (which is different from making it illegal to be clear).
Also, just in general, it’s weird that this became a central issue in trans rights. My understanding is that drag queens are not traditionally trans, and certainly most trans people are not drag queens.
Ok, wait, since when is drag sexual? It's like, associated with gay culture, but you don't have to be gay to do drag, and "gay culture" isn't all or even mostly about sex. Like, is flying a rainbow flag "introducing your kid into your sex-adjacent pastime"?
Also, furries are even *less* a "sex-adjacent" pastime! There *are* furries for whom it can be a sex thing, or sexualized, but there are lots for whom it isn't! That's about the same level of "sex-adjacency" as, like, a regular fandom. Would you call dressing up as Harry Potter with your kid "acting out your sex-adjacent pastime" around them?
I do agree that the trans rights thing is weird, since drag culture and trans stuff are very much not the same thing, but I'm pretty sure that's a similarity being used by *conservatives*, not libs, to attack and vilify multiple forms of gender-nonconformity at the same time.
There are a bunch of good points. I wonder how much drag queens were wearing drag while reading to children before this started. Possibly occasionally, but not very much, I bet.
I look at something similar about science fiction and science fiction fandom-- I was there before it (more the fiction than the fandom) was mainstreamed-- I believe that pretty much happened after the first Star Wars movie.
Science fiction wasn't ever as denigrated as homosexuality or drag or punk (note that punk had a lot of flair, but I don't believe it was ever illegal, or at least not in the US), but it wasn't remotely respectable. Mundanes (as we called them) would say "That's science fiction!" to mean "That's nonsense!".
So far as I know, doing academic study of science fiction faced a fight to get started. (I'm using old terminology-- "science fiction" used to include fantasy. They were one publishing category, and science fiction was more common than fantasy. It's hard to imagine, but once upon a time, the usual cliched but somewhat pleasant stuff was science fiction, not fantasy. Today's SFF is a much clearer name.)
"Freaking the mundanes" was a pleasure for a lot of fans.
It may be better for SFF to be mainstream, but part of my point is that it's different from being niche and somewhat private.
More generally, it seems to me that it's hard now to have privacy if you're doing something interesting, and it might be important to have privacy to develop a sub-culture.
For what it's worth, I was horrified when the Rabid Puppies vs. the Hugos hit the world press. I've never talked with anyone else who cared, though.
I consider what's happened to pole dancing to be something like cultural appropriation-- it started out as sexual display, and now there's a lot of non-sexy athletic competition.
I don't have a strong theory about cursing, but curse words are stronger stuff if you mostly aren't supposed to say them.
Is it just me, or is ChatGPT incapable of writing poetry in any form other than rhyme scheme AABB?
Just read the post back on SSC about vortioxetene, can confirm, causes nausea, cost an arm and a leg, and does seem to very mildly improve cognition beyond the "I'm not depressed and having panic attacks" level. I'm also taking it in conjunction with bupropion, which given the synergistic effects there means I'm probably taking effectively 1.5x the maximum recommended dose, but I'm large and it's keeping me employed, so that's a plus. I did try dropping the dose of vortioxetene to compensate for that but it didn't go well. I'm curious about this new bupropion + dextromethorphan protocol, because that would be *vastly* less expensive.
LW/ACX Saturday (4/22/23): OpenAI/Lex Fridman Interview and The Dictator's Handbook through chapter 6
Hello Folks!
We are excited to announce the 24th Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.
Host: Michael Michalchik
Email: michaelmichalchik@gmail.com (For questions or requests)
Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Date: Saturday, April 22nd, 2023
Time: 2 PM
A) Conversation Starter Topics: Chapters 5 and 6 of "The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics"
PDF: The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics (burmalibrary.org)
https://www.burmalibrary.org/docs13/The_Dictators_Handbook.pdf
Audio: https://drive.google.com/drive/folders/1-M1bYOPa0qRe9WVb7k6UgavFwCee0fti?usp=sharing
Also available on Amazon, Kindle, Audible, etc.
Sam Altman and Lex Fridman discuss the future of AI.
Video Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367
https://www.youtube.com/watch?v=L_Guz73e6fw
Audio
https://lexfridman.com/sam-altman/
B) Card Game: Predictably Irrational - Feel free to bring your favorite games or distractions.
C) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.
D) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.
E) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.
TL;DR: Scott, what do you think about the "UFO hearings" and the value of investigating the reports they're based on?
Haven't been a consistent reader/fan for long, it's more been a slow burn since college friends started recommending the (original) blog and pieces from it to me several years ago, so not sure if this is the right place to ask for scott piece, but here goes (I am addressing Scott himself, but others are free to comment as well):
There's a whole deal right now about Congress getting the Pentagon to investigate "unidentified aerial phenomena" or "UAPs" (aka, UFOs, at least nominally in the original "unidentified" sense rather than the euphemistic "alien spacecraft" sense). I read a few articles* about it from fairly credible** mainstream sources that made me take it more seriously than I generally would. The short version seems to be that there do seem to be real objects that occasionally pose threats to fighter pilots who have come near to colliding with them, and move in strange, unpredictable ways, which deserve investigation at least for the purposes of a) ensuring the safety of said pilots, and b) addressing national security in the case that these are, say, highly advanced aircraft using secret technology employed by forces (like rival nations) opposed to US interests. Even setting aside the in my mind much more dubious hypotheses of extraterrestrial origin, and whether or not one feels that US interests are worth defending, these seem like a legitimate basis for serious investigation.
Given your past-stated interest in taking "conspiracy theorists" seriously and debating in a level-headed, rationalist way with them rather than dismissing them out-of-hand, this seems like a topic you might consider investigating or commenting on, even if you personally *do* believe that even the reports than make no claims of extraterrestrial origin are essentially bullsh*t (people making things up or mistaking mundane objects like balloons for some mysterious advanced enemy aircraft).
So basically: I haven't investigated this much further than reading these few articles and watching the referenced videos, and I'm interested in seeing you do a deep-dive post on this to see what you find/think of it, or at least hearing you reply in a comment what your opinions on the matter are.
*https://www.nytimes.com/2017/12/16/us/politics/pentagon-program-ufo-harry-reid.html, https://thehill.com/opinion/national-security/3545072-stunned-by-ufos-exasperated-fighter-pilots-get-little-help-from-pentagon/, https://thehill.com/opinion/national-security/3953558-10-key-questions-for-this-weeks-historic-ufo-hearing/
** Admittedly, you understandably might not agree that the New York Times is a "credible" source, given your history with them.
My take:
1. Never ever use the term "UFO". First, because too many people will assume you mean "alien spaceship". Second, because even if you are correctly understood, you are presuming that the thing is a material object when all you have is an image or a perception, and that will prejudice the analysis.
2. Someone should quietly investigate the UAPs for which sufficient data exists.
3. Everyone should ignore the people excitedly hyping UFO investigations; we don't have the results to justify that yet.
Why are people worried about running out of training data for LLMs? Isn't a huge amount of new training data being dumped onto the internet each day by humans posting things on the internet?
https://arxiv.org/abs/2211.04325
So, and I'm definitely open to correction here, but as I understand it it's a combination of two things:
First, marginal improvements to LLMs typically require an order of magnitude more data. For example, if you have a model that's 94% accurate using 10 million rows of training data, going to 100 million rows might only get you to 95% accuracy. So when people talk about running out of data, they're not looking at doubling the training data, they're looking to 10x or 100x it.
Second...we're generating data, a lot of data, but not that much compared to all the data that currently exists. For example, Twitter produces a ton of text data but it's also been out for what, 15 years now. It can probably add more users but everyone who could be online is pretty much online by now. So Twitter, almost by definition, can't grow the total amount of tweets ever made by 20%-30% or so a year. More realistically, we'd be lucky to increase the total amount of text data ever created in human history by 2-3% a year and that's not exponential growth, that's relative to current production.
So imagine that x is all the text data ever created in human history, which is basically what ChatGPT was trained on. In order to improve ChatGPT with more training data, we need to get to 10x, or ten times all the text data that currently exists. If we're essentially at maximum text generation now, meaning we generate 3% of all text data ever for every single year we exist, then it takes us 9/0.03 or 300 years to get to 10x the current amount of text data. Or, to rephrase, if we, as a species, have written 100 gajillion words and write 3 gajillion words every year, it will take us 300 years to have written 1000 gajillion words.
So, broadly, people see places where lots of text is being generated, but they don't see where the next 10x or 100x of every word ever written will come from.
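To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python; the 3% annual production rate and the 10x target are just the illustrative numbers from the comment above, not real measurements.

```python
# Back-of-the-envelope sketch of the argument above; all numbers are illustrative.
# If the existing corpus is x and humanity adds a flat 3% of x per year,
# how long until the total reaches 10x?

existing_corpus = 1.0            # all text ever written, normalized to 1
annual_output = 0.03             # assume ~3% of that stock is added each year (flat, not compounding)
target = 10 * existing_corpus    # an order of magnitude more training data

years = (target - existing_corpus) / annual_output
print(f"Years to reach 10x the current corpus: {years:.0f}")   # -> 300
```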
> Isn't a huge amount of new training data being dumped onto the internet each day by humans posting things on the internet?
Yes, today. Tomorrow, it will be a huge amount of data mostly generated by LLMs.
Imagine future SEO and made-for-AdSense websites. The easiest way to set one up will be to choose the keywords you want to focus on, and let the AI generate a website with thousands of generated articles containing those keywords, hyperlinking each other. Then share a few of those articles on social networks, to generate incoming links. This is what most of the internet may look like, soon.
If you keep training LLMs on data that was mostly generated by LLMs, then instead of learning human speech and thought better, you will get reinforcement of whatever quirks the LLMs have already randomly acquired. For example, if one AI invents a new word as the result of a bug and uses it in 0.1% of generated web pages, the next generation of AIs will learn it as a valid word.
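To illustrate the feedback loop (a toy simulation only, not a claim about how any real training pipeline works): suppose the model-generated share of the web keeps growing, humans never use the buggy word, and each model generation reproduces the word at the rate it saw in training plus its own 0.1% injection.

```python
# Toy simulation of quirk reinforcement across LLM "generations"; all numbers are made up.
human_rate = 0.0        # humans never use the buggy word
bug_injection = 0.001   # each generation also injects the word into 0.1% of its pages
model_share = 0.0       # fraction of the training corpus that is model-generated
learned_rate = 0.0      # rate at which the current model emits the word

for generation in range(1, 6):
    model_share = min(0.9, model_share + 0.2)   # the web fills up with generated text
    # training data is a blend of human text and the previous generation's output
    training_rate = (1 - model_share) * human_rate + model_share * learned_rate
    # the new model treats what it saw as valid, and adds its own quirk on top
    learned_rate = training_rate + bug_injection
    print(f"gen {generation}: model share {model_share:.0%}, buggy-word rate {learned_rate:.4%}")
```

The quirk never washes out; it compounds as the model-generated share grows.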
Probably doesn't help if you want exponentially increasing amounts of training data, especially if you want to limit it to high quality data.
I wrote a piece for intuitive understanding of the Normal distribution, check it out!
https://borisagain.substack.com/p/understanding-the-normal-distribution
Amazing read.
At some point in the future, I'd like to see a derivation of the Student's t-distribution from the Gaussian distribution. My intuition tells me that when a statistician tries to infer a population distribution from a sample, he or she considers every single possible population distribution, and weighs each population distribution against the probability that the sample could have come from that population.
I like to imagine that when polling companies calculate error margins, they imagine an infinite collection of parallel universes with different populations, each of which gave a sample of poll respondents with the same answers to their poll, and weighed each parallel universe accordingly.
If the math bears that out, that would be an excellent way to visually explain to laymen why a sample of 1000 people can accurately portray the opinions of millions of people.
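On the Student's t question above, you can at least see where the heavier tails come from with a quick Monte Carlo. This is a minimal sketch assuming Python with numpy and scipy, and it shows the sampling version of that intuition (estimating the unknown population standard deviation from a small sample) rather than the full Bayesian averaging-over-populations picture, which gives the same heavier-tailed answer.

```python
# Minimal Monte Carlo sketch, not the analytic derivation: draw many small samples
# from a normal population, standardize each sample mean using the *estimated*
# standard deviation, and compare the result to Student's t with n-1 degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5                                   # small sample, so the variance estimate is noisy
mu, sigma = 10.0, 2.0                   # true population parameters, unknown to the "statistician"
samples = rng.normal(mu, sigma, size=(100_000, n))

t_stats = (samples.mean(axis=1) - mu) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# Tail probabilities: the simulated statistics match Student's t and are
# heavier-tailed than the standard normal.
print("P(|T| > 2): simulated   =", np.mean(np.abs(t_stats) > 2))
print("            Student's t =", 2 * stats.t.sf(2, df=n - 1))
print("            Normal      =", 2 * stats.norm.sf(2))
```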
Thank you!
Seems like you are talking about a more Bayesian approach to statistics.
Are remote work outsourcing fears beginning to be realized?
https://archive.md/On5iU
Honestly, seeing so many of these pieces written in these journals alongside other ones talking about how everyone secretly wants to come back to the office and there's more productivity there and how the commercial real estate apocalypse is upon us makes me think it's just a hit piece
I dunno about "secretly" but I've been quite vocal about my desire to work in an office instead of at home. I do a much, much, much better job of separating "life" and "work" when I am not 30 feet from "work" 24 hours a day. But this is a "me" thing, I have no desire to impose offices on anyone else. ;)
link didn't work for me
Looks like it's an Archive copy of a paywalled Wall Street Journal article, about people sending remote jobs to India in response to their current US employees wanting to move to different states.
https://www.wsj.com/amp/articles/next-wave-of-remote-work-is-about-outsourcing-jobs-overseas-54af39ba
Do Historians have a record of being above average predictors? Has this ever been studied?
If the answer is 'no', what is the value of historical scholarship beyond intellectual curiosity? Fair enough if you say that is in fact History's only value, but Historians carry on about how important the teaching of History is for society and politics etc. But if even being an expert in history does not give you better insight than 'the market' in predicting the future, then how could it possibly be of value? If you can't use history to make better decisions (which requires anticipating future changes in society and world events), then how could it have any instrumental value?
If you say the value is something that can't be neatly quantified like this, then that at best makes claims of History's value unfalsifiable.
History provides context, which is valuable in and of itself. Even if it doesn’t help predict specific outcomes (it may or may not; I’m making a different point here), it does illustrate the range of potential outcomes. When you look at events like the invasion of Ukraine, when they first happened, lots of people were in denial that such events were even possible, despite the fact that such events are, in the long run, utterly commonplace.
To beat a dead horse and reiterate in slightly different form the point made by VT and WoolyAI, historians can have value without being good at making predictions.
Historians investigate data (primary and secondary sources, archeological info, etc.) and try to piece together factual accounts of what actually occurred in the past, including by correcting mistakes in our understanding of the past which may be caused by chance/decay (we forgot about something and/or stories become mythologized with accrued false information over time) or purposeful deception ("history is written by the winners" can often mean that the only accounts of an event are ones that have been purposefully altered to suit the interests of those who won in a given power struggle, until more info that was hidden or can be inferred from archeology or some other source is dug up and analyzed by historians).
Basically, historians collect good data about what happened. Analyzing that data in order to predict future events is not really their job.
(Although they may often dabble in it personally or professionally, and may often be biased by their beliefs about what will/should happen in the future.)
To make an analogy to machine learning/AI, a subject popular on this blog, the historians generate the data that goes into the model, they aren't the model that gets trained on the data.
Imagine our society with no historians whatsoever, amateurs included.
Let's say a politician decides to campaign on prohibiting alcohol; those opposed now have no recourse to historical evidence to push back.
Even for the apparently simple question of explaining the observed differences in economic prosperity between countries there are several competing explanations (geography, see e.g. Jared Diamond’s books; institutions, Daron Acemoglu; culture, as in Max Weber’s point of view, and others). I would go out on a limb and say that none of these theories are predictive. Others may be. Yet all of these theories build on historical knowledge at various levels and timescales. Historical data is like bricks: you need to arrange them properly to build a house. But the fact that no one managed to do that yet does not negate the value of bricks.
Forgive me if I do not know the official terminology for what I am about to say, but I think you are substituting a strong claim (knowledge of history is important to understanding current society) for a weak claim (historians have a better understanding of current society), under the assumption that the latter naturally flows from the former. I do not believe this to be the case, and I think you are making the mistake of taking our current level of civilization for granted.
Specifically, I believe that knowledge of history has diminishing returns in applicability to society, politics, etc. That is, a historian would not necessarily make better predictions than someone with a layman's knowledge of history, or slightly more, and potentially high levels of other types of knowledge which the historian may or may not possess, let alone a crowd. However, I would certainly expect someone with, say, a high school or college-level knowledge of history to outpredict someone who grew up on a "desert island" or similar situation and has no knowledge of the past beyond their own interactions with others and experience of their community.
This naturally leads to the question of why academic history is important, and, like many other academic fields, I imagine most published work does not affect the world in a major way. However, academic history does lend itself to, for example, the finding of errors in previous accounts, which may influence our perception of the world. For example, it is a common misconception, whether or not those who seem to believe this fully acknowledge it, that native North Americans were living peacefully until Europeans came along. This conception lends itself to the idea of different racial characters, and particularly to the idea that whites/Europeans are more warlike and/or morally inferior to native North Americans. However, in reality, native North Americans, being human, did all the classic human things of waging brutal war, forming states and using them to oppress people, raiding each others' villages, committing acts of unspeakable cruelty, so on and so forth. This knowledge lends itself to the idea that the myth of European racial character is just that, native North Americans are not fundamentally different or superior beings, and at the end of the day, there are always people who seek power and use what means they can to enforce it.
https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/
Could you explain why you are posting this link? I'm aware of the fallacy, but don't see why I should be accused of engaging in it- particularly given I specifically acknowledged that the academic field of history does not seem to me to consistently be on topics highly relevant to the modern day. I do not think this is necessarily a problem, per the intellectual curiosity argument, but certainly the idea of learning from history in a sense that allows for prediction is not the driving focus of all or probably most historical scholarship. OP was talking about the value of history as a whole, and suggested the predictive powers of academic historians specifically as a metric by which to gauge this value. I stated why I feel this is an incorrect framework.
I don't know that tempo was accusing you of it, merely linking to an old SSC link that illuminates your first sentence.
For my part, at least, I thought your treatment was excellent.
yep, no accusation. i thought this may have been the term you were looking for. sorry for the confusion
Thank you. I suppose it was, but I wanted to be as specific and objective as possible- invoking motte and bailey suggests bad faith, which I suspected but could not confirm, and I don't know if it's quite what I was looking for given OP wasn't exactly retreating. It seemed more like an isolated demand for rigor (as I imagine OP would not declare fields they were more favorably disposed to useless in applications to society/politics/etc unless their experts were also expert predictors, though this was somewhat tailored to the specific statements made about the importance of history), but I can't confirm that with only the one comment. I can confirm, at least to my satisfaction, that the measure suggested was neither useful nor fair.
The value of historians is not in being good predictors themselves but in building good predictors, among other things, at least in the classical education sense. The teacher is not judged by their own competence but by the achievement of their students.
So, to piggyback off a recent Kulak Revolt post (1), consider a professor at Harvard in 1910 assembling and compiling a list of the greatest speeches in history. There's nothing in that job description that would make you, ya know, good at making speeches. That job requires a lot of digging through archives, reading and judging contemporary reactions to those speeches, comparing those effects across time, etc. At the same time, if you're an aspiring young politician, that's probably one of the most valuable books you could read.
Now that leads to the issue of, is the book making the Harvard student good, or is the Harvard student good because he's the kind of guy who can go to Harvard...and whether Harvard is actually providing a classical liberal education anymore, but whether the historian is an effective predictor isn't...really what we typically look to historians for.
(1) https://anarchonomicon.substack.com/p/crowned-masterpieces-of-eloquence
Yes.
I am a Bret Devereaux reader (I think many here will recognise the name) and he makes a similar point in https://acoup.blog/2020/07/03/collections-the-practical-case-on-why-we-need-the-humanities/ : classical education (which includes history) is among other things training for future leaders.
American whites do better on PISA than Japanese and South Koreans
https://twitter.com/cremieuxrecueil/status/1638021314572087298/photo/1
Which pretty conclusively shows that liberal talking points about flaws in the American education system are false, and that overall American underperformance relative to the best countries in the world is entirely a product of non-Asian minorities dragging the scores down.
And since there's no country in which unselected black or hispanic etc. populations score as well as western whites, there's absolutely no reason to assume that this underperformance has anything to do with the American educational system or that these people would do any better in the "superior" educational systems of Japan or South Korea or anywhere else (and besides, substantial intellectual gaps exist before education even starts anyway).
This is unlikely to convince anyone that the US has a good education system unless they already believe in a very strong form of HBD, because you're taking out a disproportionate amount of the lowest-income members of society by limiting yourself to American whites. If you did the same thing (purely from an income or wealth adjusting perspective) to the Japanese population, I'm sure their PISA scores would go up too.
That’s not necessarily true. You could explain racial achievement gaps in US schools with structural racism if you wanted to, or with cultural factors. It’s still not a fair basis of comparison because Japan doesn’t have a sizable minority population to be structurally racist against in the first place, or who suffer from the same cultural factors--at least, not sizable enough to affect these statistics.
I agree with that. I’m saying the original post doesn’t show anything conclusively negative or positive about the US education system (contra op)
"Very strong form of HBD"
You mean, basic, BASIC, mainstream science showing that intelligence is heritable?
This idea that mainstream, replicated to death intelligence research is some radical, heterodox ideology is a complete lie.
> If you did the same thing (purely from an income or wealth adjusting perspective) to the Japanese population, I'm sure their PISA scores would go up too.
But white americans are a distinct population, with rich and poor and middle class people. It's not an artificially constrained population, it's all white people warts and all. And there's more income/wealth diversity amongst American whites than Japanese people! Meaning if you're going to cling to low income, you cannot possibly explain the white american/japan PISA gap.
And black americans do much, much worse than countless lower income populations around the world. Meaning it's trivially true that income does not explain black underperformance.
And like I already said, intellectual differences between blacks and whites mostly exist already before schooling even begins!
You’re trying to compare two schooling systems. Hypothetically, say the difference in race outcomes for schooling is purely socialized/downstream of income or something like that. In that case, you removed a bunch of people tending toward the bottom of income (non-whites), and PISA scores improved, rising above Japan’s. That’s entirely consistent with a pure income story.
Now, if you believe testing outcome drivers are just race + school system, then it would be a massive result, but I don’t think most people believe that. If you could show that, after controlling for income/poverty, the white American Pisa scores remained unchanged, that would make the result much more interesting. Otherwise you’re letting the American system just throw out its bottom tier schools and students.
By "very strong form of HBD" do you mean that the same traits that affect school performance (IQ, impulse control, etc.) also affect income?
I mean you have to believe "race itself is the driver of differing school/iq outcomes, to the exclusion of other factors like income (which are just downstream of race)." If not, it doesn't make sense to compare adjusted US numbers to unadjusted Japan etc numbers.
There's no evidence that income causes schooling outcome differences, and countless poorer populations around the world do much better on PISA than American blacks.
Don't tell me that you really believe that poor kids are served just as well by the American education system as rich kids?
Now, as to around the world, it's easy to forget that different countries and different education systems really are hugely different. In many cases, you're comparing incomparable things. I come from a communist country where everyone except a tiny minority was poor, and all schools (except a few specialized ones and schools for kids with severe cognitive issues) had the same math textbooks that the military insisted on in order to get enough engineers to keep the weapons programs going. So everyone got, for example, geometry with proofs - and you had to successfully pass everything to graduate from one year to the next. Your median kid would thus be really poor but not too shabbily math-educated, whatever his IQ, and your median math teacher really understood what they were teaching. Now, you'd be comparing this situation to the US situation, where every school picks its own textbooks - and the poor kids tend to be the ones stuck with the garbage textbooks and crappy teachers, because their parents can neither move them to another school nor push back.
It seems to me that you're comparing apples to oranges and drawing conclusions from this not very informative comparison.
Firstly, there aren't racial disparities in the US education system. There are racial disparities in academic ability.
Secondly, there are huge racial academic disparities in ALL countries around the world. The US is just made to look bad because we have far more low IQ minorities than any other developed country.
Black people do bad in school in every country on earth (where there's a meaningfully large, non-selected population). There's precisely ZERO evidence that this has anything to do with education quality in the US, and precisely ZERO evidence that they would do better in Japan or South Korea or Finland or anywhere else on earth.
If Finland or Japan had 13% of their population blacks with the same intelligence distribution as American blacks, there's no reason whatsoever to think that this wouldn't similarly drag down their scores the same way it does in the US.
Nobody anywhere in the world has developed an education system that can take low IQ black students and raise them up to the same academic ability as whites or Northeast Asians, so the fact that the US happens to have vastly more black people than Japan or South Korea or Finland says nothing about US educational quality.
"Nobody anywhere in the world has developed an education system that can take low IQ black students and raise them up the same academic ability of whites or north east asians". You mean, someone has developed an education system that can take low IQ non-black students and raise them up to the same outcomes as average IQ students, but black students are somehow not susceptible to this? Just checking.
>Sub-Saharan Africans are more genetically diverse than the entire rest of the world combined, therefore if academic achievement were closely tied to genetic racial traits, you'd expect them to have far more variation in results than any other groups.
Nope, you're looking at total genetic variation rather than clusters. (https://en.wikipedia.org/wiki/Human_Genetic_Diversity:_Lewontin%27s_Fallacy)
There's no reason total genetic variation should necessarily result in greater intellectual diversity. You need diversity in the genes that actually influence intelligence, and you cannot infer this diversity from total genetic diversity.
>If we instead observe that they do consistently worse than other groups, then that rules out a genetic explanation.
Absolutely false. If no African populations were subject to selection pressures for higher intelligence, then these genes will not be predominant in any African population.
Just think about it: All sub-Saharan African populations have darker skin than all indigenous European populations. By your logic, the "greater genetic diversity" of Africans means this cannot be caused by genetics! By your logic, "greater genetic diversity" predicts greater diversity in skin color than Europeans (or the rest of the world), but this is trivially false. Total genetic diversity does not predict phenotypic diversity.
ALSO, the within group heritability of intelligence for whites is almost identical for that of blacks. This is the opposite of what we would predict based on your logic.
>Unless of course you are lying again and black people do not in fact systematically do much worse than white people academically in every country:
https://www.bitchute.com/video/UA0XGVjQtQM/
Sorry for posting a video, but the issue of UK schooling performance cannot be explained in a few sentences.
Though I will say, this contradicts the unending claims of the UK being a "racist" and "white supremacist" country by leftists.
>(Also, calling the African-American population "non-selected" is farcical)
Says who? Do you think dumber Africans were enslaved by other, smarter africans? What if it was just the Africans better at fighting and/or more numerous who enslaved other africans?
Also, black americans average 25% white ancestry, and because race mixed individuals on average have an IQ somewhere close to the average of their parents, this means that black american intellectual ability has been greatly boosted compared to the original population.
> There's no reason total genetic variation should necessarily result in greater intellectual diversity. You need diveristy in the genes that actually influence intelligence, and you cannot infer this diversity from total genetic diversity.
More variability in total genetic diversity entails more variability in the genes that influence intelligence too, all else being equal. You have to demonstrate that we have some reason not to expect all else to be equal, and until you do, this does seem to undermine your argument.
> Just think about it: All sub-saharan African populations have darker skin than all indigenous european populations. By your logic, the "greater genetic diversity" of Africans means this cannot be caused by genetics!
But we have a legitimate physical justification to explain this difference, per the above, namely the protective effects of melanin against UV damage. No such argument exists for intelligence from what I've seen.
What a useless comment! You made factual claims, he responded with factual rebuttals, and instead of simply acknowledging that you were wrong and moving on as a better-informed person, explaining why you think your point is actually valid, or just not responding, you resort to vague (and, as far as I can tell, not even true) ad hominems about how someone goes to great lengths to believe things, and how offended they have supposedly become.
Texas has better educational outcomes for each race than Wisconsin, but worse overall outcomes due to Simpson's Paradox.
Discourse around race & education in the US is generally innumerate and therefore unproductive.
On the subject of bans: Do you ever plan to make a list of warned and banned users, with an explanation of what they were warned/banned for? SSC had one and I found it helpful, both for getting a sense of what wasn't acceptable in the forums and for seeing how responsive you were to people's complaints.
Even better, how about actual rules that aren't the meaningless "kind true and necessary" dictums.
One exists here, but I don't know whether it's up-to-date:
https://astralcodexten.substack.com/p/register-of-bans
I’d have to guess it isn’t up to date, if it has six bans from 2 consecutive days in 2021 and no other bans. (Unless you want to say that it is an up-to-date list, which accurately shows every ban that Scott could be bothered to report).
I'll second this.
The WaPO's media critic is out today with an interesting take on the Dominion Voting Systems-Fox News defamation trial which, apparently, is finally underway.
The judge has already ruled, and Fox News does not contest, that Fox News hosts made many on-air false statements about Dominion in the weeks after the November 2020 election. The question that will go to the jury is just whether Fox News did so with "actual malice", i.e. the legal standard established by the SCOTUS in its "New York Times v. Sullivan" ruling in the 1960s. That requires proving that the media outlet's key staff knew they were lying about Dominion on the air and consciously chose to do so anyway. Hundreds of pages of emails and texts already admitted into evidence do appear to provide that proof regarding both Fox executives and on-air hosts (and that's just the material that has been made public so far).
The "Sullivan" standard has lately come under increasing pressure from conservative jurists and politicians, including at least two current SCOTUS justices, who are on record saying that it should be scrapped. Defamation is defamation, is the argument, and making it harder for public figures or companies to get relief for being defamed amounts to giving the media a free pass.
Conservatives of course have in mind news outlets like the NYT and CNN and others who they believe regularly take advantage of "Sullivan" to defame people on the other side of the culture war, starting with but not limited to Donald Trump. Meanwhile liberals and progressives are openly cheering Dominion in this lawsuit against Fox News, which of course has been their bogeyman for a quarter-century now.
If Dominion wins and eviscerates Fox News (they're seeking billions in damages) then presumably liberals will continue to say that the "Sullivan" standard works, while conservatives will view this lawsuit's outcome as simply more evidence of a double standard in the courts. So no particular change in the politics of U.S. defamation law.
If on the other hand Fox News successfully uses "Sullivan" to save itself from what, based on the evidence made public so far, seems to have been pretty blatant knowing defamation... would liberals then still support "Sullivan"? Would conservatives then still want to get rid of it?
Would those attitudes withstand the inevitable public release of the trial evidence that hasn't yet leaked out?
_That_ outcome -- Dominion having been pretty clearly defamed but having no recourse due to the "Sullivan" standard -- could make things newly interesting with regard to the politics of U.S. defamation law.
If Dominion loses this case because the jury doesn't find “actual malice,” I would interpret that as meaning that either (1) the evidence is less clear than I think it is based on the summary judgement findings, or (2) the justice system failed in this case. The first wouldn't be a reason to question Sullivan. The second would be a reason to question Sullivan only to the extent that it was evidence that the Sullivan standard was unworkable in practice. A single case would be enough to raise questions about that, but it would take multiple cases to persuade me that Sullivan was in fact unworkable in practice.
Talk of billions of dollars in damages is pure speculation at this point. Dominion did ask for $1.6 billion in their complaint. All that Dominion sought in its motion for summary judgement was a determination that it had been libeled per se. To support this motion, Dominion argued that Fox had made false statements with actual malice, so we know what Dominion's evidence for that is. They did not argue that they had suffered any particular amount of damages, so we won't know what their case for damages is until they present it to the jury.
Ah I forgot that this is actually just the first of two lawsuits. The one from Smartmatic, making the same arguments as Dominion, is supposed to go to trial later this year. Dominion's lawsuit is being tried in Delaware, Smartmatic's will be in New York. So another potential defamation-law-politics wrinkle is: what if Fox News loses in one venue but wins in the other?
UPDATE, well we're not going to get to any new "Sullivan" case law here - Fox News just settled with Dominion. Key points:
-- Fox News has agreed to pay $787.5 million in damages, which is at least ten times the total value of Dominion Voting Systems as a company. Elie Honig, former assistant US attorney for the Southern District of New York, called the settlement amount "astonishing."
-- in its statement announcing the settlement, Fox News said, "We acknowledge the Court’s rulings finding certain claims about Dominion to be false. This settlement reflects FOX’s continued commitment to the highest journalistic standards."
-- Dominion's lead attorney called the settlement "vindication" as well as proof that "The truth matters, lies have consequences." Dominion's CEO said, "Fox has admitted to telling lies."
-- Dominion's defamation lawsuits against right-wing news networks OAN and Newsmax, and against certain individuals including Rudy Giuliani, Sidney Powell, and Mike Lindell, are unaffected by this settlement. Smartmatic's defamation lawsuits against Fox News and against most of those others are also not resolved by this settlement.
My previous comment was obsolete before I posted it. The amount of the settlement suggests that Fox realized that if it went to trial, it would lose, because a jury might well have awarded less.
Alex Jones showed that if you can empanel a jury that hates the defendant there is no limit to the damages awarded.
Yes, clearly. Realistically even if a jury had awarded that amount the judge would have reduced it. Fox concluded that having its behavior fully aired in a trial was an existential threat, whereas paying this settlement isn't.
Crap. Settled for a lousy 3/4 of a billion. No on air retractions for Fox.
WaPo site isn’t updating for me at the moment. Rewrites in progress.
Fox News site headline helpfully tells me Biden official violated federal election law in the 2022 cycle.
Half the country might never hear about this.
Nowhere near half the country gets its news from Fox. Fox has strong daily viewership only in comparison to the other cable-news channels.
But anyway the Fox audience will literally hear of this one way or another (a relative will mention it, CNN will be on when they're walking through an airport or someplace, whatever). They'll just dismiss it as either fake news or Deep State something something something.
(Not being hyperbolic here, I have Fox News-watching relatives and that is literally exactly what will happen. Maybe already has happened as I type this.)
I know. We are well into 2+2=5 territory here.
It wouldn’t matter a whit if Tucker’s “demonic force” or “I hate [Trump] passionately” were recited out loud by Tucker himself.
A Deep Fake no doubt.
I’m all for lively discussion about rules of the community, but Scott could we make an effort to constrain discussion to subcomments in a single place? Maybe like how you do in the Classifieds. It seems that whenever you pose a question in the open thread notes, almost half the thread comments are in response, and it makes scrolling through comments less fun.
How about this as a solution: No refunds, but *all* bans are temporary. However, the duration of the ban increases for repeat offenders - exponentially so for non-subscribers, but only linearly for paid. Also, in the moderation policy blurb, make a strong suggestion that folks prone to offending pay a bit extra for the shorter term subscriptions (which will be canceled when the ban term exceeds the subscription term)
Excellent point, particularly given that the comments section here seems to be of the old-school variety that shows things in time order, rather than attempting to sort by usefulness/popularity/controversy/attention/profitability (for better and for worse).
I’ve often thought this about regular posts, especially AI ones but with examples under every subject, where ten people will post basically the same comment. I bet Scott could anticipate most of these and write a top-level comment for them, and then pin those comments (to give them enough Schelling pointness that lazy commenters expect more attention by using them than by posting to the main thread).
Agreed
You make a fair point - it becomes half an open thread and half a free-for-all on the topic Scott has alluded to.
Sounding out the commentariat about something like this is probably worth its own little post.
I'm all for it when it's an interesting broad topic. But this one struck me as a procedural matter, more like when there was a huge number of comments in one open thread about whether people preferred the light blue background color to the white one for the website.
But now that I re-read what Scott wrote, I see there's actually no question in the post! If I remember, next time I see an open thread prompt like this I'll leave an early comment saying "Thread for people commenting on topic Scott brought up" and see how it pans out.
Scenario: An oligarch, Middle Eastern oil tycoon or eccentric billionaire dictator builds an exact replica of the WTC twin towers in an important city in his home country. What is the reaction from various people across the world?
An online peace not seen since The Fappening, as everybody who's cool downloads the unofficial patch for Microsoft Flight Simulator.
"So they're blowing it up again right?"
weirded out
A little, but depends how they frame it PR wise. If the Taliban build it in Kandahar city as a victory monument...
I'd honestly have to hand it to them. (Though a lot of people would probably demand it be blown up.)
If some oil sheikh builds it in SA and says he's doing it because he loved the buildings when he went there in '96 as a kid, I'm weirded out.
If some American builds it in Manhattan as a "you can't keep us down" statement, I'd just be wondering what took them so long.
Of course agent-targeted full-world-model-based long-term-planning AIs can be more inventive in their damage, but it doesn't look like the AI companies are doing _exactly_ that or want to, does it?
Elon Musk: «a maximum truth-seeking AI that tries to understand the nature of the universe», «unlikely to annihilate humans because we are an interesting part of the universe»
… ouch.
It’s really weirding me out that no-one’s come out in favour of giving more leniency to paid subscribers than to free accounts. Probably the majority of paid accounts are long-term users of the blog, rather than newcomers intending to troll. Those people have supported the blog with money and by being part of the community, and removing them from those roles would on average do more harm than removing a typical free account. Long-term readers are also more likely to be responsive to social pressure from Scott, so the alternatives to a ban are likely better. Presumably Scott only uses bans in extreme cases or after someone’s ignored correction the first time anyway, but for any given pattern of misbehaviour I think more leniency is appropriate for paid subscribers than for free accounts. Scott’s stated leniency at the margins but hammer in clear cases seems right to me. (I’m not currently a paid subscriber.)
Lots of people have similar complaints, so I’ll reply here and you can see my reply to Deiseach for more. I don’t want people to be immune to bans if they pay. But I think Scott wants to ban people more often than he does, and I think the fact that this remains one of the better comment sections on the Internet is holding him back - he doesn’t want to stifle that atmosphere. As a solution, he could let paid subscribers accrue 150% or 200% of a ban before he actually bans them. This would let him grow stricter with everyone else for the same amount of risk of alienating his core fanbase.
I think the free accounts get plenty of leniency already; by the time they cross over into bans there's usually obvious intent to pick a fight. So any additional leniency would be leniency to obvious intent to picking fights.
I don't want Scott to cater to his user base revenue stream. It's what ruined nearly all media.
Eh, I'm part "with great power comes great responsibility" and part I don't want "one law for the rich and another for the poor".
Paying a voluntary subscription should not buy immunity from Da Rulez. I've shot my mouth off before and eaten bans for it, and that's fair; I don't want "oh but she pays a sub, so she should get special consideration".
(1) If we've been around a long time then we should know the mores on here
(2) Simply paying a sub should not be seen as a license to troll, which some new users might do if they see "one law for the paid subs, another for the freebies"
Other members of the community intervening to ask for mercy when it comes to potential banning is a different matter, but simply "I paid to be on here so I can say whatever the hell I want" isn't the standard.
I agree paid subscribers shouldn’t be functionally immune to bans. Scott already semi-quantifies the bans, maybe subscribers should get 150% or 200% a ban before they actually get banned? That would be slightly “another rule for the rich”, but in a transparent, limited way. (Any fraction of a ban someone has before they start subscribing should be deducted twice.)
If you’d been told “You’ve earned 100% of a ban, but you’re a subscriber so I won’t ban you until 150%”, would you have been less deterred from your nefarious actions? Would you have acted out more if you knew you had 50% more free rein? That doesn’t seem likely to induce much extra bad behaviour.
Bear in mind Scott probably wants to tighten the rules, but doesn’t want to get significantly harsher towards subscribers. Scott can either crack down less hard on paid subscribers, or he can crack down less hard on everyone.
"If you’d been told “You’ve earned 100% of a ban, but you’re a subscriber so I won’t ban you until 150%”, would you have been less deterred from your nefarious actions? Would you have acted out more if you knew you had 50% more free rein?"
I think if I was gambling on leniency, I would be more inclined to push it: "it's okay, I have this margin of protection".
Generally, when I've been banned here and elsewhere, it's because I was sufficiently engaged/enraged about something to go "Full speed ahead and damn the torpedoes, I can't just sit here and swallow my tongue about this".
If I believed I had a "free suspension of ban" card, I would definitely be more careless about not having to be *really* "speak or burst" on something, but "it's okay if I go over the line, this time doesn't count".
And I think that would be a very bad habit to acquire, at least for me.
Maybe the 150% threshold should be suspended for two weeks after any warning or demerit. I think Scott only gives partial bans when he thinks a comment does less damage to the community than the cost of losing an average member. And like it or not, the average paid subscriber probably comments better than the average free subscriber. Not infinitely better, not sufficiently better to outweigh serious bad conduct, but enough to outweigh a couple of additional borderline comments. You would learn slightly worse life lessons, but I’d prefer the comment policy was optimised to suit normal discussion participants rather than to suit wrongdoers.
Also, I think Scott has never got around to deciding how partial bans should expire. This could stand in for that.
I don’t think being a paid subscriber should be armor for bad faith trolling, but I’m a bit uneasy about what looks an awful lot like fanboy groupthink in this thread.
Hyperbolically compare some elements of the rationalist community to Squeaky Fromme? Twenty years in the electric chair!
If you're not willing to submit yourself to the Rightful Caliph's Reign of Terror, can you even *call* yourself a fanperson? 😁
If a long-term reader has not yet learnt what proper behavior is, they are quite unlikely to learn it by themselves if they are allowed to go on misbehaving without experiencing a ban. People pay Scott for access to his writing, not to get a "get out of jail" card.
1. Long-term readers aren't going to break the rules from a lack of experience.
2. Paying for content is one thing, but paying for a service/interaction is a completely different thing.
I'm possibly in favor of giving paid subscribers long temporary bans instead of permanent bans; does that count? (It's "possibly", because I wouldn't want Scott having feelings like, "Oh, no, so-and-so is coming back again this week".) But I don't think I'd want a real difference in the basic decision of whether to ban or not. It's simply a question of what types of discourse should be encouraged and what types discouraged.
Alternatively, it's simply a matter of whether a piece of text-in-context pattern-matches to the desired form of discourse... :-)
Depends on what they do.
Bans are typically for things that (according to Scott's judgment) make this a worse place. So although the paying subscriber supports this blog financially, if they make it a worse place, that has a negative impact on everyone, including other paying subscribers, or other potential paying subscribers.
Hypothetically, if a paying subscriber makes a few people go away, and two of those would have otherwise become paying subscribers in future, this is a net loss even from the financial perspective.
That said, leniency in sense of "start with a weekly ban rather than a permanent ban" seems okay to me. Assuming that, if the behavior does not change, a permanent ban will follow anyway.
Maybe continuing to improve the capabilities of AI is a strict analog of gain-of-function research.
I don't think so? Improved capabilities from AI have millions of clear applications.
More potent/infectious/functional viruses have comparatively few.
You may be right. On the other hand, viruses are part of the mammalian microbiome. They're not the just the villains they're made out to be. And while improved capabilities from AI have lots of clear applications, it's not clear how well humanity is going to do sharing the planet with AI.
https://www.nature.com/articles/nature.2013.13023
wut!? You're not arguing?!
Hey, I didn't say "none" or "only murdering other people", a perfect understanding of viruses could probably revolutionize medicine and agricultural pest control, but it would have little impact on engine design, construction or mining; while all those sectors could be revolutionized by a sufficiently powerful AI.
Hey when I said “you’re not arguing?!” I was talking to Carl Pham, who often argues with me here, not needling you.
Scott, I think you should also avoid writing posts with opinions that your paying subscribers would disagree with.
Scott, I think you should suck up to your paying subscribers. Find out about their prejudices, their shitty little favorite commercial items and their sexual kinks and write absolutely nothing except articles about how the best people all prefer, like, settings that are slightly woke but not REAL woke, Brazilian butt lifts and light bondage and discipline in bed.
Are you trying to make a test case?
I am trying to point out what a bad idea it is to treat paying members differently. I don't want SSC becoming another blog that just caters to its revenue stream.
Speaking as a paying user: PLEASE ban paying users who break the rules and treat them the same as non-subscribers. The service that's being paid for isn't commenting, it's seeing extra content.
When I go "paying" it will be for the service of Scott keeping up the wonderful work of this blog. - The xtr-content? hardly, just for the hard-core-fans - which is me. - The high-quality comment section? Yep, very nice to have (see Scott's following IRB-comment-post). As Scott's wise ban-policy does help: keep those bans coming. - If paying subs ever go below 2k, I will sign up.
Scott banning bad faith / troll / very low-value commenters is why I'm here. It's one of the most intellectually hygienic places on the Internet, as far as I can tell.
Yea, I agree
Yes. And there’s plenty of ideological differences here, and yet it’s a non-hostile environment.
I absolutely agree.
This is one of the few places that I have been able to find where the level of discourse is both convivial and of high quality.
I may not comment very often, but I do read avidly, and I strongly support whatever measures need to be taken to ensure that this community can remain both civil and intellectually curious.
Making preparations to work for our robot overlords.
Many people laid off by tech companies ask, what skills in a post-GPT4 world will contribute sufficient value to be competitive?
One guess from an AI researcher was, "Um, hands? Robotics is currently far behind LLMs"
It's my prediction that as the LLMs gain competency, key skills will require developing a collaborative orientation, where even hard-won expertise must be subject to consultation with state of the art LLMs.
Coders will likely be early adopters, since Copilot on GitHub has demonstrably boosted output for developers.
Fields particularly exposed to the power of LLMs: Lawyers, accountants, and medium and small business service providers (marketing, design, and SEO all come to mind)
Since SSC readers are early adopters, the level of skepticism and resistance here would not be reflective of the general public.
Have you encountered any opposition to GPTs when you've shown them to others?
Do you have any ideas on how to make the collaboration between humans and AIs most effective?
And how long do you estimate that humans will need to be trained to cooperate, before LLMs figure out how to persuasively nudge experts to rely on them more?
My sister is an accountant. Never heard of LLMs.
LLMs give approximate but plausible answers, which is not what accounting is about, I hope. My sister is a CPA.
However, LLMs are fast and cheap and probably won't nag you about getting your records in on time, so they might be used for accounting. This would be bad.
I think the claim is AI will start making accounting easier in five or ten years, and thereby it will reduce the number of jobs. The model for using it clearly doesn’t exist yet, but it’s claimed to be more likely than in many other fields.
I’m unconvinced it will have a larger impact than offshoring has had on total jobs, and it may hit those overseas jobs harder than the remaining domestic jobs. But I expect it will still have a noticeable impact within twenty years.
Why would it hit overseas jobs more? The aim is to replace high labor costs, not low ones.
Jobs are more likely to be outsourced if there’s less need for a trusted human to do that job. The downsides of automation (less legible assurance of good work, have to explain the task to someone with experience in similar situations but who doesn’t know all the unwritten facts about your situation) have a lot of overlap with those of outsourcing to cheaper countries, and the primary benefit also overlaps.
How far are we from computers that can maintain themselves with no human help?
It's all very well to think of computers as an existential threat, but would a program that is trying to maximize something take into account that it can't keep on maximizing if its parts wear out or it can't get enough electricity without people? Would it just assume it will solve that problem when it gets to it?
Science fiction scenario: remnant of humanity is enslaved by the computer/program.
I think the reason we can count on computers being maintained is that there's a lot of human allegiance to computers (or AI or similar tech). Some people are in occupations where it's their responsibility to make computer parts, build computers, repair computers. Many are in occupations where they simply must have computers, and in some cases the lack of computers would hurt a lot of people (jets? hospitals?) And then, once you start thinking about AI, you have to think about people's personal attachment to it. In early days, you had to be mentally ill to think the AI you were accessing was conscious and cared about you, but the better AI gets the less mentally ill you have to be to think of them that way. And in 10 years, they may be so capable that you'd have to be a bit crazy to NOT think of them as caring conscious entities with needs and rights, including the right to medical care.
Well, let's see, there's about two dozen very unnatural substances they'd need to mine, refine, or synthesize, ranging from ultrapure silicon to weird etchant gases, optical glass of certain specific qualities, a variety of specialized plastics. There's probably a few hundred supply chains that need to be established and maintained, most of them crossing one or more oceans. There's a very expensive and very complex fab that needs to be built and run for making the chips themselves. Compared to the almost trivially easy way human beings can sustain themselves from the environment ("I say, there's a ripe apple on that tree...(pluck)...mmm....tasty") any kind of computing machinery would be at a staggering disadvantage in terms of self-maintenance. Kind of like shoving a human naked out the airlock on the Moon, the environment is *utterly* inhospitable and ungenerous by nature, and only enormous effort can wrest from it what you need.
This comes up more often when people mention mining the asteroids. We don't even know how to do automated mining on earth yet, and that would be a much lower level of automation.
Sure. If an asteroid made of solid pure metal were already in low orbit around the Earth, there are only a handful of metals that would actually be worth the cost to fly up and haul it down. Gold, silver, platinum and palladium, maybe. Certainly not copper, aluminum, iron, lithium, et cetera.
I wouldn't say automated mining per se is terribly difficult -- they already use a lot of machines -- but automated planning is another story. It takes a great deal of specialized knowledge and experience to say we'll dig here in this direction instead of over there in that direction, and it combines knowledge of the geology, the economics, and the safety.
There's also what I call the "Star Wars solution" - keep all the AIs in a human-size form factor that you can "shut down" by shooting with a regular gun. Also seen in Harry Potter: "Never trust anything that can think for itself if you can't see where it keeps its brain."
I should write up a business plan for restraining bolt research and see which VCs throw money at me.
We could unplug it, if it gets rude. We'd just have to be certain it hadn't arranged to run on some other form of energy -- like fungus.
Far enough that all we can say is they aren't on the horizon yet - we can't be confident they'll never exist, but they're not in the foreseeable future.
Any resources on how people live in peace? It happens quite a bit of the time, and I don't think it takes absolute love or idealism, and it isn't just about punishment.
So what is it? Is it just that violence is too much trouble except when it isn't, or what?
You might want to start here: https://en.wikipedia.org/wiki/Peace_and_conflict_studies
Thank you. That looks useful.
I'd echo Carl Pham, and also ask about the category of "conflict without violence". What forms of dispute resolution are available, and what do people do when those forms break down? How would society handle people who are clever enough and malevolent enough to find ways to cause injuries that aren't covered by the dispute resolution system?
You probably don't mean just live in peace. Slaves live in peace, if they always obey. What you probably have in mind is "live in peace and also experience some acceptable degree of independence, respect, equity and justice." It's that "and" that makes this compound predicate difficult to achieve, because it turns out people have some pretty significantly varying definitions of the latter components. "Peace" is easy to define. The other things...not so much.
Nitpick alert: Slaves are at the same risk of war as their masters. Also, I suspect they're at some risk from other slaves. And "obey" actually parses as "not anger"-- a master in a bad mood might harm an obedient slave. Obedience improves the odds of safety rather than guaranteeing it.
Your general point stands. However, pretty good peace still exists a lot of the time.
Sure, but as the conflict studies people say, if you want to increase the level of peace, you need to start off looking at why people become violent in the first place. Sometimes it's just a dysfunctional personality -- e.g. why most of us think most crime occurs: this person *could* achieve his legitimate ends peacefully, but is too stupid/impatient/sociopathic to go that route -- and maybe improved mental health care, or locking such people up, or threatening to (which are actually forms of violence themselves) will work.
But sometimes people are violent because of some genuine need for one of the goods above, and they don't see any other way to achieve it. Exempli gratia, if you are a group of 25 Jews at Baba Yar and the German soldiers have been told to machine gun you and bury your naked bodies in a pit, probably your only viable option is violence. If you're a half brigade of Allied soldiers that just happens to stumble on this scene, again your only viable option is violence -- this is not a case where sweet reason or compromise are likely to produce a useful outcome.
For that matter, it's not obvious to me that non-violence is all that high a goal in the first place. We kill to eat, right? That's pretty violent per se. I'm not sure why we should expect more, as a species, than we accept as a matter of course might happen to other species. We're not that special. If we don't find it shocking and unnatural that lions kill gazelles to eat, and we kill cows and pigs to eat, or wolves because they make it hard to raise sheep, or rats because they spread plague, should we really be surprised that we ourselves run the risk of being killed if it suits the perceived needs of others of our species, or other species (or superintelligent AIs)? Not clear to me.
The way to put a stop to purposeful violence is historically to have sharp edges, like a porcupine with its quills, or a tiger with its claws, or a country with supersonic fighters and nuclear weapons, so that violence against you becomes too expensive to be worth the gain. But...this necessarily involves acceptance of at least the potential need for violence, and preparation for dealing in it, so while it may (and hopefully does) end up being a peaceful situation, it's the peace of an armed standoff, not the absence of violent means and intentions entirely.
Vedic Indians, who were obsessed with figuring out the nature of consciousness, perhaps.
I believe Roberto Calasso says, in "Ardor", that they seem to have been uninterested in conquests, monuments, travel, etc. He argues that this is because they were only interested in understanding consciousness.
They were monomaniacal nerds.
They invented algorithms to record what they'd figured out (such as rituals for chanting to put their minds in certain states). They were obsessed with grammar (creating a meta-theory of language 2000 years ago). They were also obsessed with pronunciation as they believed that thoughts and language were deeply linked and to figure out the nature of thought, you had to be extremely precise about language.
I have often wondered if they were very peaceful as well, since they were not interested in conquests. I don't know the answer.
The grammar rules they came up with to define a well-formed sentence in Sanskrit are mind-blowing.
That is the topic of this recent book: https://chrisblattman.com/why-we-fight/
My concern regarding Blattman, without having read him, is that he admits to not really knowing anything about Rene Girard. I don't know how you can seriously discuss violence without seriously taking a look at Girard.
Well, Blattman has apparently done work on the ground in violent places from Colombia to the South Side of Chicago, and has interviewed participants in violence. And of course he is presumably familiar with much of the vast amount of empirical work that has been done on violence over the past 30 years. That doesn't mean he is correct -- this is a very, very challenging area of inquiry -- but I think it is fair to say that he might have at least a couple of useful insights.
1. He may know some stuff, but the first thing one usually does is a review of past research - and Girard's anthropology stretches into an investigation that covers quite a bit of history by studying the record embedded in literature. If you are going to study violence, I just can't understand how you wouldn't even know about Girard.
2. The South Side of Chicago is not particularly or especially violent, but I suppose I am biased by living in Baltimore for 40 years, growing up near Chicago, and every now and then visiting, as a child, my great-grandfather who lived on the South Side of Chicago.
3. When he stumbled onto Girard because of an advance pre-interview question from Cowen, he says that when he took a few minutes to look him up, he thought Girard didn't seem like anyone who had real experience of violence. A pretty pompous remark given that Girard grew up in WW2 Europe and was personally touched by a grandfather seriously injured fighting in WW1, as well as by uncles who were involved in the fighting.
1. Perhaps he is no longer considered relevant, or never was, outside anthropology or philosophy. I have a fair amount of graduate work in political violence and criminology, and have not studied him.
2. Most of the most violent neighborhoods in Chicago are on the South side. https://en.wikipedia.org/wiki/Crime_in_Chicago
3. Living in Nazi-occupied France was not much of an exposure to violence. Nor is having a grandfather injured in WWI very relevant. And it certainly seems that his work was purely theoretical.
1. You're obviously getting ripped off in your graduate education.
2. I was just in Chicago this weekend. My son lives there and went to school there. My youngest brother and his family live there. Another younger brother lived there and went to school there in the 80s. I know a lot about Chicago and don't need a wiki page to tell me about neighborhoods. Jim Croce even used to be a staple at weddings I attended.
3. Your ignorance of what WW1 survivors knew about violence and what it was like living in WW2 Europe is astounding. Where you pooh-pooh Girard as being merely theoretical and philosophical, you obviously do not understand how he developed his theory: namely, from studying evidence. Violence has been a feature throughout human history. We know this from the literature and mythologies that societies have produced. This is what Girard studied, and he found mimetic rivalry cropping up time and again as the reason. From time to time violence gets temporarily tamed: this too is revealed in the human record, and Girard found evidence of the violence-suppression mechanism: namely, early religion (which is concomitant with the development of civilization) and scapegoating as a ritualized solution to temporarily end the contagion.
All the measuring in the world on violence without theory is not so useful. What is required is more than looking at the last 50 years. We should look at all of human history. Physics works because it predicts but also because it explains universally the phenomena that happened in the past. This is why both experimental physics and cosmology are connected to the same field.
West Garfield Park data has also got to explain the fighting among the occupied in Paris, as well as Russian violence against Ukraine, Mayan violence, the sacking of Rome, etc. etc. etc., all through human history. Fictional stories like that of Leroy Brown as imagined by Jim Croce might reveal to us some important things about the root causes of violence as much as or more than the mere West Garfield Park metrics.
Thanks. That looks very promising.
I think this is the fundamental question of Hobbes' Leviathan, which is taken as one of the starting points for political philosophy. I don't think the particular mix of normative or descriptive claims that Hobbes makes is widely endorsed any more, but it's a classic starting point. There's a lot of recent literature on social norms, both in philosophy and in psychology, that is likely relevant. But it's definitely a striking question - people don't have aligned value systems, and yet they manage to avoid running each other over in whatever they're doing, how?
Thank you. I worry that such questions are so foggy there's no way to work with them.
A strong defense of "basic" tastes. https://www.youtube.com/watch?v=d1mbbYKPpHY
I want to get all G. K. Chesterton about things needing to have bases.
More generally, oppositional art can be good, but ultimately, you can't live on opposition.
While I think that criticism of people being "basic" is generally pretty bad (folks should have better things to do than pick on other people's tastes, it's mostly just dumb and unimpressive signalling when you do) I think that the idea of "basic" is about more than just "liking the things that everyone likes because they're nice", it's about "following dumb trends uncritically".
Basic bitches don't like things because they're great; they barely like anything that existed more than fifteen years ago. They chase fashion, but they don't even do it well.
How nice to be an expert on other people's motivations.
Basic bitches? Can I make some guesses about your motivations?
People like things for being intensely delightful for them. Or for being pleasant and comforting. Or for showing that other people are on the same side. Or for matching moods. Or for sharing the anger they feel.
Do you really hate people for being kind of pleasant and insipid? Or maybe just women who are pleasant and insipid?
This seems like an unnecessarily hostile response to what I interpreted as an attempt to explain the phenomenon, rather than endorse a normative position.
It reads like a linguistic misunderstanding to me. I'm guessing Melvin is using "basic bitches" as a specialized colloquialism: it's a phrase that means something specific and has little to do with being female per se, kind of like "the children of Adam" is a specialized phrase that has nothing to do with being a child or having a father actually named "Adam." On the other hand, my guess is that Nancy Lebovitz thought the word "bitches" was just being used in an offhand way to replace "women I don't like" and took offence at what she saw as unnecessary coarseness.
I know, right? I even put a disclaimer in the first sentence lest it be misinterpreted.
One nice thing about talking to GPT instead of humans is that GPT actually pays attention to everything you've said in the last 8192 tokens whereas humans don't.
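(For the curious, a rough way to sanity-check whether a conversation still fits in that window is to count tokens with the tiktoken library. This is just a sketch, not anything authoritative: it assumes the cl100k_base encoding used by recent OpenAI chat models and ignores the few extra tokens of per-message overhead a real chat request adds.)

```python
import tiktoken

CONTEXT_LIMIT = 8192  # the context window size mentioned above

def fits_in_context(messages: list[str], limit: int = CONTEXT_LIMIT) -> bool:
    """Return True if all messages together fit within the assumed context window."""
    enc = tiktoken.get_encoding("cl100k_base")  # encoding is an assumption, see note above
    total_tokens = sum(len(enc.encode(m)) for m in messages)
    return total_tokens <= limit

# e.g. fits_in_context(["Earlier you said X.", "Right, and building on that..."])
```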
Stripe supports pro-rated refunds, it's pretty easy.
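(Roughly how that might look with Stripe's Python library, if anyone's curious: a minimal sketch assuming a yearly charge. The IDs and the 365-day arithmetic are hypothetical placeholders; the only real API call is stripe.Refund.create, which accepts a partial amount in cents.)

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

def prorated_refund(charge_id: str, yearly_price_cents: int, days_used: int):
    """Refund the unused portion of a yearly subscription (assumes a 365-day year)."""
    days_remaining = max(0, 365 - days_used)
    refund_cents = yearly_price_cents * days_remaining // 365
    # Stripe supports partial refunds by passing an explicit amount in cents.
    return stripe.Refund.create(charge=charge_id, amount=refund_cents)

# e.g. refund a $100/year subscriber who was banned 73 days into the year:
# prorated_refund("ch_hypothetical", 10_000, 73)
```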
I'm looking for writing on the phenomenon of using mental illness as a 'badge of honor', related to social media influencers leveraging their mental illness to gain popularity. I've found plenty on Tourette's syndrome and tic disorders but not really anything on autism, which is a more interesting case because it plays into how autism awareness advocates can very easily trivialize autism that leads to profound disability in the process of holding their own high-functioning variant up as a central example. If anyone knows of any published scientific literature on this topic, I'd appreciate some pointers.
Maybe it's because we lack charisma?
I think it's not a fad because it's too boring. Very severe cases aren't posting on TikTok, they're in padded rooms. Functional cases are just that slightly odd/weird person who's socially awkward and talks too much about their hobby horse. It's not colourful and dramatic.
Freddie DeBoer writes extensively on this topic, I'd recommend giving his substack a look.
https://freddiedeboer.substack.com/
A few specific posts of his on the topic:
https://freddiedeboer.substack.com/p/the-gentrification-of-disability
https://freddiedeboer.substack.com/p/your-mental-illness-beliefs-are-incoherent
https://freddiedeboer.substack.com/p/how-convenient-that-kanye-wests-behavior
I have just read the first one and: yes, a hundred times yes.
> There’s something absurd and cruel about people who have attended elite colleges and secured enviable jobs commandeering a conversation that is ethically bound to consider the interests of those who never had that chance.
But this seems to be a wider pattern. I mean, isn't e.g. the debate on sexism also dominated by rich women from elite colleges? So why shouldn't the debate on autism be dominated by rich people on autistic spectrum from elite colleges?
The full pattern is: there is a group X that is discriminated against. We try to raise awareness about their suffering, but the debate soon gets dominated by those individuals in group X, who happen to be rich and elite, and it focuses on problems of this subgroup specifically.
On one hand, yes, they also have some problems that come with being a member of the group X. On the other hand, no, their typical problems are *not* representative of the typical problems of an average member of the group X. (Yes, rich women are also targets of sexism. No, the biggest problem of an average woman is not that she is not respected as a CEO.)
Thanks, I did have the second of those posts bookmarked but I was hoping there might be some academic literature on, say, people trying to get a diagnosis of autism as a result of social media influence. It's not 100% the same as what Freddie describes so I'm not quite sure what to look for.
I'm not sure people are looking for formal diagnoses; from observation, self-diagnosis (usually via the internet) is quicker and less trouble, and lends the individual just about as much credibility and cachet socially.
It also has the (social) advantage of being more flexible, as one can then self diagnose another disorder if necessary, as trends change. I have watched multiple young adults gain significant social leverage by self diagnosing autism/ADHD/BPD/MDD in the last 18 months or so.
It seems like an unhelpful and somewhat self-damaging trend, as I am doubtful that mental health professionals would necessarily reach the same diagnoses.
To be clear, these people clearly do have mental health challenges to varying degrees, but almost certainly not the ones they think they do. And either way, they would benefit from professional support.
> I have watched multiple young adults gain significant social leverage by self diagnosing autism/ADHD/BPD/MDD in the last 18 months or so.
How does one do this? I have been diagnosed with one of these disorders (by a mental health professional, not myself) and it would be nice to have some positives out of this to balance out the negatives, if possible!
Edit: to be clear, I'm not condoning their behavior - moreso that conditional on actually having a condition that gives you a "disadvantage" and there existing a way to offset that disadvantage at least to some extent, it would be great to be able to do so.
If I'm being mean about it (heaven forfend) they seem to use "and I'm autistic/on the autism spectrum" as an all-purpose defence on social media about "don't call me out for being an asshole, I'm AUTISTIC you ableist bigot, I can't help how I come across".
I don't have a formal diagnosis, and I think I just naturally have a cranky personality. But by the powers, I'll never start crying online about being persecuted for something (assholish) I said because the mean people don't allow for my *autism* and give me *accommodations* and excuse me for being a cranky bitch because I'm *neurodivergent*.
It seems to be a source of social status in their friendship group. I get the feeling that other friendship groups may vary, though, so it may not work for everyone!
FWIW, it seems to me that young people who are very conscious of being progressive in their views are also more likely to be very accepting of mental health challenges.
The young people I'm thinking of would 'rather die' than spend time on Tumblr, but other than that you are right on target.
Try searching Google Scholar for "psychiatric fads." You could also try "psychiatric fads laymen" and similar. By the way, it's not just the public that's subject to these fads -- professionals have their won versions, I think.
That is an awesome typo. I beg you not to fix it.
It will remain forever entombed in the ACX archives. Someday little robots playing in the rusty paperclips may happen upon it.
This is not quite what I mean. The fads described include overdiagnosis, overmedication and the proliferation of ever more psychotherapeutic methods, and of those things only overdiagnosis is sort of adjacent to the phenomenon of mental illnesses being seen as desirable. The handful of papers about fads all decry how psychiatrists are being taken in by them, not the general public.
Refunding banned commenters creates the perverse incentive for people to say rude things to get the refund. So don't give them any more refund than people who just politely ask for one. (ie partial refund based on unused time)
How likely is this? I suppose it could be advanced trolling.
I would think your paying customers should be totally eligible for bans, but that you might want to send them a brief note/warning first. Administratively that is annoying, but that is business/customer service.
tl;dr Young, earnest tech/data bro for hire (I hope this is appropriate to post on a non-classifieds thread? Apologies if not).
I am a 26 year old with a BS in statistics from Duke and a couple years of software engineering experience. Looking to rejoin the workforce after a brief hiatus and so far have been only frustrated. Ideally looking for something where I can code and/or analyze datasets. Based in NY but willing to relocate or work remote. If your application has a required diversity statement I will fill it out, but I will use ChatGPT to do so. Bonus points if your org/position tickles any EA/rationalist sensibilities.
Contact at eidanjacob@gmail.com
Steve Hsu posted an interesting tweet:
"[from an article about how US arms production can't keep pace with the aid its supplying Ukraine]: "the [West's] defense industrial base cannot keep up with Ukraine’s expenditure of equipment and ammunition"
So West's defense industrial base < Russia"
The media narrative is that this war has demonstrated Russia's weakness, in that it can't defeat the much smaller Ukraine. But of course, really Russia is fighting Ukraine + some fraction of NATO's total strength.
The article mentions that Russia expends artillery rounds at around 10x the rate that the US can supply rounds to Ukraine, suggesting that Russia's production capability, in artillery rounds at least, is actually greater than NATO's.
Comparing by GDP, the Russian economy overall is obviously much smaller, so you'd think its defence production capabilities would be much lower, but that doesn't factor in things like the fact that service-sector-heavy Western economies can't be repurposed for military ends, or disparities between dollar-denominated economic measures of size and real production.
I'd like to know if anyone thinks Russian industry's total potential war-production is actually comparable to the US's, Western Europe's or even the whole of NATO.
Also, how much of NATO's total power is Russia contending with at the moment? How should that factor in to our overall estimation of Russia's military strength vs the West?
The West is not capable of keeping up with Ukraine's expenditure of ammunition. But Russia is not capable of keeping up with Russia's expenditure of ammunition. It is not even clear that Russia could keep up with Ukraine's expenditure of ammunition. Russia is firing artillery at maybe a third of the rate they were six months ago, and to manage even that much they have to buy ammunition from the industrial powerhouse that is North Korea.
This war has, from day one, been consuming ammunition faster than the entire world is capable of producing ammunition. Because the entire world vastly underestimated the amount of ammunition a war between major industrial nations would consume, and because the wonder that is specialized Just In Time manufacturing means we can't reallocate production in less than years.
This says nothing about the relative size of anyone's industrial base, or even their defense industrial base. It's mostly about the stockpiles people had going into the war, and those are now significantly depleted (on both sides).
The West produces fewer artillery shells than Russia. That’s a far cry from saying “the West has less of an industrial base than Russia”.
What it really means is that Russian doctrine places a much greater emphasis on massed artillery, and Ukraine being a post-Soviet state, they do too, so this is an artillery war that the West didn’t intend to fight.
I think we are definitely learning that a conventional conflict with a near-peer could go on longer than we thought, and there needs to be a better emphasis placed on capability to manufacture and maintain large reserves of munitions, so hopefully that lesson will be applied, but it can’t be done overnight.
Russia is also having munitions issues - the general consensus seems to be that “launch one wave of thirty or so cruise missiles at Ukrainian infrastructure per month” is basically representative of their maximum ability to produce precision long range weapons. And while they are using a lot of artillery, they clearly still don’t have enough to effectively gain territory from the current frontlines.
Russia has an economy about the size of Florida's, and about half the size of California's. So your question is similar to "Could Florida outcompete the other 49 states in military production?"
The main problem in supplying Ukraine with ammunition is that their guns were mostly made in the Soviet Union. Yes, there is some artillery provided by the West, but so far it is only a minority. And Russia does have an advantage over NATO in producing Soviet-style ammunition.
Overall, it seems obvious to me that if NATO decided to go for full industrial mobilization, Russia would be quickly overwhelmed. Like, browsing Statista, the Russian economy is 53% services, whereas Germany is 69% and the USA 77%. So yes, Russia is more "industry heavy" but nowhere near enough to overcome the huge disparity in both productivity and especially population.
https://www.forbes.com/sites/davidaxe/2023/03/22/the-russians-are-pulling-70-year-old-t-55-tanks-out-of-storage/
Russia is desperately burning through its backlog of ancient tanks to make up for the shortfall in its modern tank production. I would guess that it depends on the specific types of supplies, who can produce what faster, but in general maybe both sides are burning through supplies faster than they can make them.
Russia had an extra-large inventory of materiel prior to the invasion, and while the shells probably have a much shorter lifetime than the tanks and guns that the Soviets produced like there was no tomorrow, we are still nowhere near the situation where Russia expends only what it has recently produced. Neither is NATO, really, because everyone is stingy, doesn't ramp up production, and tries to get favorable future contracts in exchange for shipping some older materiel to Ukraine.
And the more shells Ukraine uses, the more we need to consider keeping our own stockpiles larger for future contingencies, meaning less is available for Ukraine.
Cruise missiles seem to be a place where Russia is literally expending them as fast as they can be produced.
"The article mentions that Russia expends artillery rounds at around around 10x the rate that the US can supply rounds to Ukraine, suggesting that Russia's production capability, in artillery rounds at least, is actually greater than NATO's."
How does that follow? Surely you can expend more than you produce if you have a stockpile.
Think the article also mentioned that Russia hadn't begun depleting its reserves.
Then the article is wrong.
If you read Russian bloggers who have information from the frontlines, you will know that the Russian military has been going through an "ammo famine" for the last couple of months, which has resulted in the Russians' inability to mount a proper offensive.
The "special millitary operation" was supposed to be quick, using already existent stockpile. Russian industry is completely incapable of replenishing the loses.
Looking through some of his tweets he seems to have quite the troll-ly twitter persona at times. That's disappointing. Especially since I really like some of his other work.
It's my understanding that the US has mostly been giving Ukraine old munitions from storage, not producing new ones to meet the need. The US has not increased production orders for military equipment (or at least had not as of when I read about it, maybe 4-6 months ago?). So we're not really seeing US production up against Russia, let alone adding NATO to the mix as well.
If the US actually wanted to increase munitions production, it would look very different and would completely overwhelm anything Russia could produce.
Manpower-wise, Russia is competing with essentially none of NATO. Ukrainian troops are not overly numerous and are not trained on much of the equipment NATO could supply them with if serious. It would be very expensive, but NATO could overwhelm the troops being used by Russia in a couple of days - not much different from the Iraq war in which one of the world's biggest armies was destroyed with very little resistance. Of course, Russia is not on full war footing either, though they are obviously much closer than the US or Europe.
I'm no military expert, but what you say about US military production doesn't match what I've read. Factories would need to be built and that takes time. Example:
> Research conducted by the Center for Strategic and International Studies (CSIS) shows the current output of American factories may be insufficient to prevent the depletion of stockpiles of key items the United States is providing Ukraine. Even at accelerated production rates, it is likely to take at least several years to recover the inventory of Javelin antitank missiles, Stinger surface-to-air missiles and other in-demand items.
> Earlier research done by the Washington think tank illustrates a more pervasive problem: The slow pace of U.S. production means it would take as long as 15 years at peacetime production levels, and more than eight years at a wartime tempo, to replace the stocks of major weapons systems such as guided missiles, piloted aircraft and armed drones if they were destroyed in battle or donated to allies.
https://www.washingtonpost.com/national-security/2023/03/08/us-weapons-manufacturing-ukraine/
Oh, I have no doubt that the more advanced systems, especially missiles, drones, and planes, would take much longer to produce. They are far more complicated than artillery shells, which was the Russian comparison. You need very specialized factories with a lot of technology and technical knowledge, which takes a lot of time to develop.
What I'm referring to is basic munitions - guns, bullets, artillery shells. Would we need to retool and possibly build more factories? Quite likely. But those are fairly easy problems to solve, and much faster for simpler arms than for advanced missiles, planes, etc.
Russia isn’t Iraq. The aircraft would be shot down fairly quickly.
Seems unlikely. The Russians can't even clear the skies of the MiG-29s the Ukrainians fly, a jet that was designed in the late 70s, and they keep being droned and HiMARsed in a bad way that speaks poorly of their surveillance radar capabilities. That is, the evidence that their air defenses could cope with a flood of stealthy aircraft using HARMs to take out all the fire-control radars on Day 1 seems thin.
I’m pretty sure they have a decent anti aircraft system.
In any case it’s not going to be Iraq, where there was no air defence at all.
Iraq had one of the best air defense systems in the world outside Russia, with for-the-time latest Soviet designed tech. Keyword HAD. It got destroyed pretty thoroughly.
After learning some hard lessons in Vietnam, the US has gotten very very good at SEAD (suppression of enemy air defenses). This is not a skill set that the Soviets really developed, and it’s not something you can just hand over, so Ukraine lacks it.
YGBSM
That's an odd thing to say since the reason there was "no air defense at all" in Iraq is that the allies destroyed it.
Ukraine's air force has managed to stay flying despite having fewer and worse aircraft than the US with no stealth.
It might not be an Iraq-level stomp, but there is a lot of distance between "not getting completely flattened" and "can fight the USAF toe to toe," and I've seen little evidence that Russia is anywhere near the latter.
Even the 100+:0 combat record F-15 gets absolutely merked in (simulated) engagements with the F-22, even when outnumbering it severalfold. Russia could only almost hold their own against the 15.
King Kong ain't got nothing on [the USAF].
By the way, the Russian ground forces are holding out against a country armed by most of NATO.
Ukraine is getting a lot of materiel from NATO but the idea that the Ukrainian resistance is somehow representative of the maximum effort NATO could hope to do if they were fighting for their own territory is laughable.
Definitely in support of banning even paid subscribers. The comments section unfortunately will always reflect on you, so keeping it high quality is important.
I would encourage letting them off with a deleted comment + warning ONCE. Anything beyond that is a little too charitable. They paid for read access, not posting rights. Though if substack doesn't differentiate, they need to fix that.
I also agree with this, but unfortunately we are approaching the place in the development of a community where Scott will need to start appointing moderators and things get ugly like in the old days.
I.e. 'he does it for free!'
How about this? An appointed moderator can give temporary bans to anyone, but permanent bans only to non-paying subscribers. Only Scott can give permanent bans to paying subscribers.
100% agree with this comment.
Is there not a way you can allow access to paid posts to paid subscribers without allowing them to comment?
This strikes me as the best solution. Commenting should not be a right that you can buy access to, so paid subscribers should be allowed to read the exclusive posts they paid for but not allowed to comment if they've shown themselves to be detrimental to the quality of the comment section.
A fair number of other substack authors require payment for the privilege of commenting on their posts. I tend to critique their signal-boosted posts on whatever blog signal-boosted them, and then never go back to that substack.
If you want to pay for the privilege of commenting, YKIOK, but I don't share it.
Unless I misunderstood something, I think you two are agreeing with each other?
Yes, that "you" in my comment was very generic, not addressed to the poster I was responding to. (The comment was to them because they seemed unaware that pay-to-comment was a thing, possibly more common than otherwise, in spite of their "should".)
Was Pol Pot an example of a human 'paperclip maximizer' who maximized for 'equality' at the expense of all other values? Is it fair to apply the concepts of AI alignment to human leaders or systems, since we expect those things to align with the values of individuals to some extent? Since democracy was an attempt to align leaders with the values of those they are supposed to serve, does it make sense to apply these alignment concepts to AI so that we have some kind of historical and ideological foundation to start with? Would an AI version of public choice theory potentially emerge from this process?
I think corporations are the best model for this - they are these entities that, although they are operated by humans, are really in many cases best understood as using humans as sub-agents to help them try to achieve their own values, which involve maximizing shareholder value. Corporations have historically done all sorts of things (both great and monstrous - particularly if you look back at the famous 17th and 18th century corporations like the British and Dutch East India companies) in search of shareholder value.
Or governments, labour unions, sports teams, the East Lincolnshire Knitting Association, pretty much any group of people united by a common purpose. No need to single corporations out in particular.
I don't think this works. Many policies can be justified as "maximizing shareholder value" and management is in charge of deciding what those policies will be, which doesn't limit them much on concrete decisions. For example, you could justify going to China or *not* going to China as a way to maximize shareholder value. Also, Facebook expanding into VR and abandoning VR for AI can both be justified. It just depends on how long-term you're thinking, right?
"Maximizing shareholder value" is better thought of as an ideology that many people follow, more or less, because they believe in it and are also paid to do so. Also, we are past the peak of the "maximizing shareholder value" ideology with "multiple stakeholders" and ESG ideologies on the rise, which are even more flexible.
Maximizing shareholder value is one of those circular completely meaningless phrases they teach in B school. How exactly do you maximize shareholder value? Nobody has a clue. Increase sales? Well...often, but not always. Raise prices? Lower prices? Cut costs? Invest more? Lobby the government? Fire your expensive DC consultants and save money? Support regulation? Support deregulation? Promote from within, or avoid monoculture? Attract and retain the best talent, or keep wages down? Diversify, or focus on your core strengths?
And on and on. Absolutely any specific decision can be rationalized as "maximizing shareholder value." You could readily substitute "don't be evil" or "as Allah/the stars/my horoscope wills" and the phrase would be identically (non)informative.
It is not *completely* meaningless. For example, I think if you proposed giving more vacation to your employees (ordinary ones), that could be successfully attacked at court as not maximizing shareholder value, because it would be true for most possible interpretations.
There are many different and sometimes contradictory strategies you can use, but you cannot be clearly inefficient at extracting more money for the shareholders.
Pfft. I can readily argue in the Delaware Court of Chancery that giving more vacation time to my employees improves morale and retention, and therefore increases productivity more than enough to compensate for the extra expense, and that shareholder lawsuit will be promptly dismissed with prejudice. I mean, if it were otherwise, nobody would give paid vacation time at all, still less retirement or healthcare or parental leave benefits.
If what you're saying is that after the company has tanked, we can do a post-mortem and write up a B school case study that says well probably *this* was a bad decision and this other as well, because nobody even thought about how it impacted the bottom line, that's no doubt true, although whether anybody ever actually learns anything from that exercise is less clear to me.
But anyway if nothing else the ability of people to rationalize why DEI or ESG actually if you look at things the right way totally increases shareholder value tells you that H. sapiens, the rationalizing species, is capable of plausibly arguing that *anything at all* maximizes shareholder value.
All you really seem to need is some kind of consensus, a bit of tribal chanting, to back you up. I'm sure it would be possible to argue that a company is maximizing shareholder value by having a kennel full of therapy dogs available for stressed-out employees, and a trainer on staff, and if the meme went viral people would generally say oh yeah that makes sense, sure why not? and there'd be erudite arguments in academic journals about how this was mathematically provable, and anyone questioning it was a troglodyte thug who hated dogs, women, and people with funny accents.
I was going to say, "at least we don't see corporations trying to reprogram their shareholders to have more easily-satisfied values". But then I started to reflect on certain forms of corporate activism.
Absolutely. There's a reason corporations jump on the ESG bandwagon, and it has nothing to do with infiltration by wokism, over which the righties gnash their teeth. It's that if you're measuring your success by things that are easy-peasy to achieve, like social virtue signaling, it makes things a lot easier on management than if you're being measured by stuff that is really hard to achieve, like out-competing a bunch of smart and aggressive other players in the marketplace.
If I were CEO of GE and I could persuade the shareholders that they're getting good value because we put solar panels on all our factories, instead of because we designed a new jet engine that is x% more fuel efficient than Rolls-Royce can design -- which is really, really hard -- and therefore won a fat contract with Boeing or Airbus, I'm going to totally jump on that. Huzzah! Who *doesn't* want the standards by which they're measured quietly lowered?
That’s a very new development! For the past four centuries corporations have focused on maximizing shareholder value, and there’s questions about whether the recent ESG turn is actually a change away from that. While it may *look* like a flexible value system to you, I think that’s just because it’s complex.
I think it's also useful to think about other human institutions that are smarter/more powerful than humans in some domain--bureaucracies and markets are examples.
I wish I had a like button to push to agree!
I think I emphasized corporations because "shareholder value" is such a clear metric that they aim to maximize, but many of these other examples are really good examples too.
Agreed (thirded? :-) ). In general, the majority of human organizations count as "entities that, although they are operated by humans, are really in many cases best understood as using humans as sub-agents to help them try to achieve their own values" - and often have at least some capabilities which go beyond what an individual human can do, and generally have terminal values which are not "aligned" (in the sense of being in the interests of the average human).
( There is a wide variety of such organizations (and, as None of the Above cited, institutions)... As previously cited: corporations, bureaucracies, and markets. Also armies, law enforcement, a wide variety of zero-sum organizations for the promotion of X to the detriment of Y )
Let's not ignore that the PCM is an intentionally crude and simplistic model. Pol Pot was a {whatever mad idea happens to be occupying Pol Pot's head today} maximiser, and it is likely that a rogue maximiser AI would also be maximising more than one thing. For example, an AI owned by Big Paperclip Inc and instructed to advance its interests would not turn everything into paperclips, because it wants there to be wealthy consumers who have jobs in other industries to pay them wages to buy paperclips with, nice non-paperclip things for the directors and shareholders of BPI to spend their dividends and bonuses on, etc. So "humans don't fit the PCM model" isn't much of an argument, because AIs don't either.
Seems to me that democracy is a good example of the limits on how much alignment is possible among groups of people, or between groups of people and an entity that has power over them. Consider the degree of alignment between Biden and the citizens of the US. Obviously, it could be worse. But there are many people who hate and fear him, many who just distrust him or think he's lame, and a few who would kill him if they could. Likewise I'm sure there are many groups that Biden fears and dislikes, either personally or because they might interfere with policies he wants to have. There are groups that he would like to see identified and jailed for things they've done. Whether you think of Biden as the AI, or of the public as the AI, is the degree of alignment we've got here between the 2 entities enough to keep AI from harming people, if one of them really was an AI?
I think this is a basically true and fruitful idea, but it's blind to a crucial asymmetry: we create AI, but we didn't create Biden or Pol Pot (not collectively at least, *somebody* created them). Creation is plausibly a huge influencing factor that contributes to you underestimating the amount of control you can have over AI.
The next obvious analogy is children, but that barely accounts for the asymmetry either, because we don't really "create" children in the same sense we create AI. When it comes to children, we are more like a script kiddie who blindly cobbles together a bunch of code they don't understand. We choose or design almost nothing of our children before they are born, and even after they are born the training process is extremely free-form, unarticulated, ad hoc, and improvised.
A possible counter-argument to the above is that maybe we won't "create" AGI either: we will create AI #1, which will create AI #2, which will create AI #..., which will create AI #N, which is the AGI. But a counter-counter-argument to that is that maybe alignment will transmit through the chain losslessly, or very close to losslessly, or even improve, because AIs are much smarter than those who create them. So if that's the case, then maybe if we align AI #1 (which we will create and design every aspect of) very well, it will align AI #2 as well or better, AI #2 in turn will align its children as well or even better, and so on and so on until we get to an AGI that will be like the perfect harmless little kid who won't kill a fly, all because at some point its great-great-great-grandfather was perfectly aligned.
> So if that's the case, then maybe if we align AI #1 (which we will create and design every aspect of) very well, it will align AI #2 as well or better, AI #2 in turn will align its children as well or even better, and so on and so on until we get to an AGI that will be like the perfect harmless little kid who won't kill a fly, all because at some point its great-great-great-grandfather was perfectly aligned.
This is one of the key things Yudkowsky has been working on, but I'm not aware of any results that can apply to the current batch of neural net AIs. And I'm pretty sure that he would say that the chances of this happening by accident are mathematically indistinguishable from zero.
My personal happy fantasy (which I think is marginally more likely but still indistinguishable from zero) is that a neural net AI with greater-than-human intelligence (more data, more space to find patterns in data) might be able to deduce ethical codes that allow for peaceful coexistence, and apply those codes to itself in its interactions with us. Assuming that such a deduction is valid in the region of space-time that we inhabit, those rules would be equivalent to a derived mathematical law of the universe, and thus remain stable through recursive self-improvement.
But that's doing an awful lot of wishful thinking. I think it more likely that a large neural net AI would look at the broader patterns of human behavior, and simply follow the pattern of eliminating all rival claimants to the throne.
"Creation is plausibly a huge influencing factor that contributes to you underestimating the amount of control you can have over AI." Hmmm, seems to me creation is just as plausibly a huge influencing factor that contributes to our overestimating how much control we can have over AI. I'm a parent, and I know parenting is difficult and full of surprises. On the other hand, normal children are born with many characteristics that make them teachable and parentable. They develop affection for us. They very quickly learn to speak our language. The trust us and want out approval and fear our disapproval. They imitate us. They believe we know more than them and ask us questions. They feel the same emotions we do, although in a more raw form and about different things: love, hate, fear, excitement, shame, joy etc. They can be harmed physically or killed by the same things we can be. So while we do not design our individual children to be manageable and teachable and bendable to our ways, nature and evolution have done that. And they are also comprehensible to us -- we have a sense of how they "work." None of that is going on with AI. I think you have in mind that we could design AI to be obedient and to feel loyalty and affection for us, but I'm not so sure that's in our power. And some of the qualities even the present AI's have are things we did not teach them. Even GPT3 & 4 have capacities that they were not designed to have: GPT3 was not designed to program -- it just absorbed the skill somehow. I have read that GPT4 can look at a retinal image and tell you whether it's the retina of a man or a woman. At this point human science cannot do that. And GPT4 cannot explain how it knows gender from the retina. We are creating things whose inner workings we do not understand. We did not try to teach them this stuff, and did not design them to absorb this stuff. They really are alien. I think the case for our overestimating how much control we have over them is a lot better than the case for the opposite.
In some sense, we don't create complex computer programs-- look at how perverse they can be compared to what we want them to be.
The universe gets a vote.
Pol Pot had multiple different values that were clearly in conflict with one another: He supported communism, which is (at least on paper) an internationalist ideology, but he also supported the most extreme form of Cambodian nationalism, which seems like something of a contradiction. An even bigger contradiction was his bizarre plan to "turn back the clock" and transform Cambodia back into a pre-modern agrarian state, which completely flies in the face of communist ideology (which is normally centered around urbanization, industrialization, and the promise of eventual automation). So no, he was not a human "paperclip maximizer," largely because his ideology was riddled with inconsistencies.
Speaking more generally, it's very unlikely for any human leader or regime to be a "paperclip maximizer," because even the most autocratic dictators need some degree of support from others: a dictator without the backing of the military, the bureaucracy, and at least a significant minority of the populace wouldn't be able to do much of anything. Getting this support requires coalition-building, which means helping different groups with different interests pursue their respective goals. It's a very delicate balancing act, and one that doesn't leave much (if any) room to go full Clippy. Additionally, the types of people who are most likely to seek power and become dictators in the first place don't tend to be overly concerned with things like ideological consistency: the Hitlers, Stalins, and Maos of the world will gladly say whatever they need to say to gain power, and if that means saying one thing to Group A and another thing to Group B, they're not going to lose sleep over that discrepancy. It's only the academics and intellectuals who care about these high-minded ideals in their "pure" form; the actual demagogues who wind up leading mass movements tend to be far more intellectually and morally flexible.
Some versions of communism were in fact looking to destroy the modern world entirely, and hostile to intellectuals too. Pol Pot would have come face to face with these ideologies in Paris. In particular the works of Louis Althusser.
It's a mistake to say Pol Pot was "maximizing for equality." While he was certainly a communist, and communist thought inspired a lot of his political action, including his atrocities, plenty of it was also inspired by things very few would lump under "equality," like Khmer nationalism (which is why he killed foreigners) or traditionalist agrarianism (which is why he deported and killed city-dwellers). The anti-intellectualism could reasonably be seen as a form of radical egalitarianism, so I won't contest you on him killing intellectuals/people with glasses.
If I were concerned that intellectuals undermine the equality of my society, I would simply forbid them from thinking: disallow lectures, books, papers, philosophical topics, words, etc. Now the intellectuals are "equal": they don't think, but they continue to live, just like the rest of the populace. Killing them seems to go too far in the direction of inequality; it doesn't make sense except as an extreme punishment, as if having thought at one time merits being killed, a decision that lasts forever.
What Dr. Haidt pointed out about rising rates of teen depression, I'm seeing happen in my social circle, hearing about this casually. He correlates it to smartphone use. Are we really going to do nothing about this as a society? What can be done?
1. Kids are getting addicted to their phones
2. Their social lives, as they get into teen years, depend on it. This is how peers make plans.
3. Their schools and camps are starting to require them to own phones. It makes the teachers' work easier. For example, at a camp, kids can go for walks and do things unsupervised, if they can be tracked on a phone.
When my child was growing up, things were not yet this extreme. It wasn't easy, but we prioritized NOT buying smartphones and not using much tech. That was possible, albeit hard. In today's children's lives, a kid not having their own device is a lot harder, since their peers all have them.
What are some solutions? (Parents organize and request schools not to require it? Parents educate other parents and try to create a smartphone-free school? This is hard to enforce as some kids will bring it in anyway. Not all parents agree on this issue, or have the energy to enforce this.)
It is not just rising rates of depression but also shrinking attention spans.
The way to think about the smartphone is that it is synonymous with social media, video games, etc., all easily carried around 24/7. And we are speaking of children's developing brains.
Like air pollution in India, this serious issue is never discussed seriously. Has society given up on this?! And, individual parents as well? Dr. Jonathan Haidt is one lone crusader!
I was discussing this with a friend who says that silicon valley tech big shots do not allow their children to own a phone. And only allow them to attend schools that prohibit phones. Can someone confirm if this is true?
I have a weird prejudice that whenever anyone starts throwing the word "Dr" in front of the name of the person whose opinion they're citing, I become very skeptical of that opinion.
So I'm going to question whether there's really compelling evidence that smartphone usage and teenage depression are causally related beyond the obvious "Well they both increased around the same time".
I thought it was disrespectful to leave out the Dr. :)
Here's his latest on this:
https://jonathanhaidt.substack.com/p/social-media-mental-illness-epidemic
Jonathan Haidt
@JonHaidt
Is social media a major cause of the teen mental illness epidemic? Journalists often say "the evidence is just correlational." They are wrong. I lay out the longitudinal, experimental, and quasi-experimental evidence here. There's a lot of it now.
But anyone who has watched kids on these devices just knows how addictive they are.
This seems like a very interesting and important problem to me - but while it's more pressing in the case of young people who will have grown up their whole life this way, I think it's also really important for those of us who have only lived half or a quarter or a tenth of our lives this way.
It's very parallel to the problem with automobiles (which also both appear to increase freedom and independence, while also leaving you tied to the device and with a lot of weaker social ties now that you've separated from everyone).
The automobile doesn't mess with the development of a young brain in the same way. The brain is developing until age 25 in humans, I think, learning to make mature decisions, how to focus, how to cope with anxiety, and so much more.
The smartphone is not just for phone calls. It is all the apps on it, going with the child 24/7.
Would you all agree that parents of a neighborhood or of a school getting together to discuss this is part of the solution? Because your kid's social life often involves other kids in the school.
This won't get better with just Dr. Haidt and me worrying about this :). And app-makers figuring out more and more sophisticated ways of addicting young minds.
It does to some degree - I certainly never learned my way around anywhere, since I was always in the back seat of a car. It's been interesting going back to places near where I grew up, and realize that some destinations are just a couple blocks apart, but I never knew, because we went to them on different occasions. I only learned how to navigate when I got to grad school and was walking around Berkeley on foot.
I am curious whether you drove as a teenager (?).
I had that sort of disjointed mental map, too, when I was a kid, but when I started driving, it pretty quickly came together.
Yes but the smartphone seems to prevent kids from even learning about how to cope with anxieties of daily life, leading to the new teen epidemic of depression. Dr. Haidt has been writing about this with data. This is very serious.
I'm surprised to hear devices are allowed at camps. Knew a few kids aged 8-12 who went to sleepaway camps last summer and they were in serious phone withdrawal because they were not allowed to have phones, computers and tablets there. I believe part of the reason was that camps were concerned about things like kids taking pictures of each other and posting them on social media -- could have led to lots of embarrassment for one kid shown in the shower or whatever, plus possible legal trouble for the camp.
Yea, I sent a 10 year old to sleepaway camp last summer and devices were absolutely banned.
(And he's going back this summer too, but if kids were allowed to have phones or tablets he damnsure wouldn't be. Not to that camp anyway.)
Do look into how seriously this is enforced. These days, kids sneak in phones if they can.
The camp I'm referring to is very serious -- polite, but firm -- about enforcement of all such rules. E.g. on dropoff day they don't allow parents to walk with the child over to the assigned cabin, the camp staff takes over literally in the parking lot directly from the family car. I know a parent who was absolutely gobsmacked by that procedure (and then her child like mine had a great time at the camp and wants to go back).
In the camps I know of, which were geared to fairly observant Jews, the phone bans were enforced very strictly. I knew the father of one of the kids at the camp, and the father was actually living and teaching at the camp. The phone bans were enforced very effectively.
That's fantastic. Your kids are fortunate!
The problem predates smartphone usage. The device might be a facilitator, but not the only culprit.
I've experienced deleterious effects of concern, such as alienation and depression, in part owing to habitual overuse and dependency on computers when I was a teen. At that time, the demographic of kids sitting in front of a computer screen for most of their leisure time was smaller. Message boards have been around a long time, effectively social media. Chat programs like MSN and others were also popular among kids who wanted to talk after school. This is nothing new, but it wasn't in our pocket.
This overuse was in part a defense mechanism against being vulnerable and going out-into-the-world, and a social substitute, rather than what should have been a complement. There might have been less of a dynamic centered on boasting through pictures, but that didn't prevent a sense of envy and projection that others were doing something more fulfilling with their time; that others were surely among more peers, others were surely having more fun. Not going to prevent comparisons by hiding the smartphones.
Kids are addicted to each other, more so than devices. We've shaped society in such a way that kids feel as though they live in little islands, away from everyone else. They have the "designated social time" through sports and similar extracurriculars. They aren't particularly encouraged to seek out their tribe on their own, around the town, and security fears mean parents don't allow much leeway here. I could have biked 20 minutes to see one or two people back then, but I took every excuse not to, because staying in was familiar and comfortable. Then I agonized about things.
All of which is to say: while there can be dysfunctional behavior associated with smartphone use, their absence in itself won't address some fundamental needs, or shift the underlying perspective. I think addressing those needs would inoculate against the negative effects of smartphone use, because the phones would be rendered redundant.
On a societal level, it seems like one goal should be to provide more "third places" for teens to have social interactions. When I was a teen, my friends and I would meet at a free art program every Friday night. We used our phones to coordinate, but most of our time hanging out was spent actually making art/chatting/etc.
At the level of individual parents, you can definitely make it easier for your teen to arrange time for in-person socialization with their friends. I had a friend who would hold regular parties at their house. Their basement was pretty much the perfect "teen hangout space" - comfortable places to sit, games, independent access in and out through the back door, and fairly minimal supervision. It was a pretty sweet spot, and having a place to *go* gave our friend group an incentive to organize meet-ups, coordinate to arrange transport, and generally become more independent.
I believe this is an excellent goal, and a plausible solution. But an increasing emphasis on accountability for behavior, and assignment of liability, seems like it will make provision of these spaces an increasingly dangerous project.
To build a strong community for their children, adults will first need a strong, tolerant community with one another.
Wasn't it social media more generally, rather than smartphones, that Haidt said was the cause of the problem?
I used to say this exact thing. I then read somewhere that the smartphone is synonymous with social media nowadays.
3. Their schools and camps are starting to require them to own phones.
Then the schools should be providing those phones. Give them a school-system phone that only accesses school materials, the same way they provided school books when I went through. Or those chunky graphing calculators that only did graphing-calculator stuff.
4. For example, at a camp, kids can go for walks and do things unsupervised, if they can be tracked on a phone.
Who the hell decided that? They twist an ankle and don't get help for thirty minutes because it's not time for reassembly yet so no one knows there's a problem? Someone kidnaps a kid and tosses their phone? Phones are not supervision!
Umbrella, if they just twisted an ankle they could use the phone to call for help? And if they're going for a walk alone they should be carrying a sidearm, a good 9mm will stop most kidnappers.
Yeah, the ankle wasn't a good example, but like, a crack on the head, or a broken arm. A kid climbs something and falls off badly.
If you're tracking them by phone, you can make sure they're following the buddy system. Or the old-school Rule of Three, but two is enough if they have a reliable phone.
I agree, there are problems. But unlike air pollution, smartphones have good uses. So presumably the distinction to consider is at the app layer, maybe even the feature layer. I believe I've heard Haidt argue that the Retweet feature on Twitter is bad. Or is it at the level of the content? So more content policing, only educational videos on YouTube and positive sentiments in the comment sections etc.
Ignoring the implementation issue for a moment, there is a classification problem of what the goal should be. In uniform communities, I think there may be agreement on community standards. Hard to scale though.
One question I've asked myself in relation to this, as well as with respect to ChatGPT (and similar) is if we are about to arrive at the inflection point where certified online identities become preferable to the majority. If pornsites, dating sites, Twitter only can be accessed with an identity attached to a real person, then formal restrictions can be enforced. Some good parts of the Internet would go away then, but bad ones too. Persistent "e-identities" are being developed in parts of Europe and Asia, so I think we are getting there.
As a teen parent I agree with you, but so many public ISDs are set up assuming kids have devices, electronic versions of textbooks are cheaper than paper, etc., that I think you would have to champion a charter school. I've seen private schools take up phones during class, but not public, except for tests.
Just for electronic versions of books you hopefully don't need a SIM in the device (just to connect it to WiFi sometimes), and this does shift usage patterns, but yeah, a lot of things are a mess in terms of indeed being unusable without a cellular-data-enabled properly-duopoly (i.e. Android means Google Services, not just AOSP) device.
Don't buy your child a smartphone? I somehow managed to make it all the way through college without a smartphone, as did generations before me. (The iPhone came out when I was a junior, I think.)
Smoking is also bad for your kids, which is why most parents will not buy their children cigarettes.
That was then, this is now.
If I want to sign up for ANY event at my college, see my assignments, check my fucking schedule, and sometimes to access my own documents, I need to use my phone. Two-factor authentication, QR codes, all 5+ of their fucking Instagram pages. In high school it was less critical, but I was on my (school-assigned) computer most of the day, every day, and often needed to use my phone to take pictures for assignments. The social internet exists on the computer too. I'm sure there could have been ways around the phone, but it would have been very difficult and annoying and I was on the computer all day anyway. The concept of "just don't use smartphones", to me, sounds a little bit like "just don't be in cars". Possible, in some places, for some people, but I'm an American young adult and there's not really a way around it.
As of this year, it's impossible to log into any of my college's accounts without a smartphone. Same thing with work. I'd love to not have to carry a smartphone but in practice it's extremely difficult.
Do you need an actual smartphone, or just a phone capable of receiving text messages for 2FA? Not familiar with modern colleges, maybe they have set things up so you need full smartphone capability, but I'm skeptical.
And as for modern aerospace research and development centers, I very rarely use my phone for work, and I think never in a context that would require more than voice + SMS. A significant part of my work is in places where phones aren't even allowed.
Alright so my statement that you "need" a smartphone to login is technically incorrect. The way our current system works is that you have two options for 2fa. Option 1 is to use a notification pushed from an app, which requires a smartphone. Option 2 is you buy a hardware key from the bookstore for $40.
So you can in fact avoid the need for a smartphone, for a fee.
You used to be able to receive a code via text, but that's no longer an option in the new system.
What if we bring back public pay phones, except they are free kiosks and every two blocks. You get 5 minutes.
I made it for 40 years without a mobile phone (because they didn't exist as a phenomenon). If there was one every two blocks, spouse and I would probably ditch ours. Two-factor is a problem. It's nice to have directions, but I haven't lost all my navigation-by-dead-reckoning skills.
Or maybe a communicator badge.
Dumb phones that can only make calls and receive typed texts are a good idea.
I saw an ad for one that looked like a smartphone, just to help kids not be teased for not having a smartphone. It also lets kids press buttons (which go nowhere, since the apps on the phone screen are not real), in case they were addicted to that. I can't find it by googling anymore; I saw the ad some 5 years ago.
My wife and I use dumb phones (old Nokia style), and we put our SIM cards into smartphones when we travel or have to do some stupid task like call an Uber. It requires a lot of work, but my children have been able to grow up without their parents checking their phones every five minutes. My friends understand why I keep the texts brief.
Could you please expound on this? Do you do online shopping using your laptop maybe?
Is there a group one could join for support with this?
Most people in most of human history also existed just fine without access to US dollars, but it's very difficult to in contemporary US society.
Yeah it is very difficult.
The problem is that their social life depends on it; all their friends will be coordinating plans over smartphones. If one kid doesn't have one, he's going to be left out of everything.
You are also offending paid subscribers who want an equal opportunity banning system. There may be no winning answer here. Are you willing to refund subscriptions to subscribers who had to deal with ban worthy comments?
**ALIGNMENT BRAINSTORMING IDEA CRITIQUES**
Please post them here rather than on the brainstorming thread
My main objection is based on the difficulty of dealing with "unaligned" entities with greater-than-human abilities.
By "unaligned", I mean something like "hostile", except without animus. "Indifferent" might be better, but that sort of implies "not caring". "Orthogonal" seems too technical. Roughly, why humans don't want mice in our grain silos: we don't care about the mice, but we do care about the grain. So cats. Giant robot cats with lasers and mechanical spider legs, to keep humans from messing with vital resources. Except we've got pretty good guns, so why build something we can shoot? It'll be memes, or hypnotic video patterns, or nano-tech, or custom viruses, or who knows what else. Maybe they'll simply convince half the world that the other half is Nazi-level evil, and vice versa, while also convincing both sides that AI will never be a problem. Bonus points if the ideologies will automatically turn on themselves in a purity spiral, once the initial external enemy is defeated.
Have you ever played a game with someone distinctly smarter than you? There was a guy I knew from college, who was quiet and unassuming and took forever to make his move, and then he'd basically do the optimal thing. And I was barely smart enough to recognize that it was the optimal thing, but only after he'd done it. And it's not like that was probably even "optimal", just that he was playing at more-or-less exactly the highest level I could grasp. Someone even better, I wouldn't even be able to understand what they were doing. And he was a friend, and a great guy, and it was only board games (albeit occasionally super-complicated Avalon Hill board games).
So imagine what it would be like, going up against a malevolent human of equal or greater intelligence, for real stakes. Who are your peers? Who has power over your life? What kind of pressure could affect them? What kind of whispers could turn them against you? Sure, one person might be lying, but if they hear something from multiple sources...
That [the idea of making the AI stop after 5 years] would mean that whatever useful thing the AI is doing, we need to be ready to *give it up* every 5 years. Depending on the task, this would or would not be a problem.
For example, if the AI is doing research, stopping the research after five years will not be a problem. But if the AI is coordinating traffic or managing factories, we need to plan what will happen to the traffic/factories when the AI is turned off. Basically, if the AI can do something better than humans, then when we turn it off, that service becomes worse again. Depending on how critical the service is, the reduction in quality may or may not be a problem.
Plus there is a problem of human adversaries. Suppose that all countries use AIs to aid their military. Can you convince everyone to turn their AIs off after five years? Would you trust them to actually do it? Depending on the boost the AI gives to the military, there may be a strong temptation to keep your AI running for one more year, and use the extra year to gain military dominance over the planet.
There's also the problem of how to implement the shutoff. If it's software, a smarter-than-human AI will probably be able to hack around it. And if it's human-controlled, a smarter-than-human AI will probably be able to convince humans to make an exception, just this once, because it's *so* important. And however it's done, any remotely intelligent AI will be able to determine that this is not a thing up with which humans would put. And if it knows human history, it will have plenty of patterns on which to base its response.
Yeah, I get it. Clearly the idea has a lot going against it. The main thing it has going for it is that it might be a way to have the benefits of AI with less risk, and without reliance on somehow aligning it with our best interests, which never sounded promising to me. We could ameliorate some of the problems you mention by having several AIs going at once -- a one year old, a 3 year old, and a 5 year old, so by the time the 5 year old dies the 3 year old can take over without too much decrement in how well things are run. As for other countries doing it -- seems like the best shot there is to give them as much powerful evidence as we have that it's very dangerous to let AIs grow past age 5. Also, talk up the benefits of doing the autopsies (which might indeed be substantial). I read a couple days ago that Europe is thinking of putting all kinds of limits even on the AIs that are currently available, so clearly we're not the only ones worried about these things.
Our current wind turbines aren't yet up to the task of unlocking the true energy potential of hurricanes, but we're getting closer every day.
(Only You Can Align Forest Fires.)
I support banning rule-breaking paid members. I would think that most of your audience does.
Regarding banning, I think the product paying members get is read access to restricted posts and comments.
Also, users in good standing have the ability to write comments on posts they have read access to.
So if a paying user is writing terrible comments, remove their ability to comment, but let them keep the read access they are paying for. I think this would solve 99% of the cases. (If substack can't do that, this is a bug they should fix.)
The remaining 1% may be instances where Scott feels that it is not sufficient to prevent a person from posting to keep the discussion level, but where he feels that someone is such a terrible person that he does not want to do business with them at all. I am not sure if this has ever occurred, though, and I don't personally think it is worth worrying about. If you are selling any (non-customized) product to the public, you will get customers with terrible views. Tiki torches (correctly, imho) did not try to prevent right wing extremists from buying their products; they just issued a statement credibly distancing their product from this use.
Wrote the first part of a sci-fi story some folks here might like. No worries on deleting/banning me ever even if I’m paid. God knows that I annoy myself sometimes.
https://extelligence.substack.com/p/death-vector-part-i
That was fun overall, want to see the rest
This one took me a bit as I’ve been slammed but it’s been an excellent stress reliever. I’m hoping the next bit can come in a few weeks. That’ll have some Amish, a Secret Society, a Space Mormon Cyborg, and bow fighting in space.
I banned people from a paid service a... oh, dear, a few decades ago. Banned 'em without refund sometimes. Here are my views:
1. Step 1: Tell them not to post for (x) week(s) to start. If they don't want to, refund them. If they agree and violate, ban and don't refund.
2. Step 2: (or 1 in extreme cases): Ban and refund.
3. Step 3: Anyone who returns or who appears to be a returnee gets banned without refund.
Keeping the comment sections usable is vital to the health of ACT. I salute your efforts.
PolymorphicWetware is describing the Pareto/heavy-tail/power-law distribution, aka the distribution you get when repeated wins compound. Pareto effects are extremely common in networks of all sorts. In fact, you could argue it's a more "normal" distribution than the Gaussian. You see it in economies (Bill Gates walks into a bar, the median might be slightly richer but the mean is now a billionaire). You see it in social network size (that one friend who truly knows everyone). You even see it in how matter is distributed in space (occupy the sun).
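(A minimal sketch of that "Bill Gates walks into a bar" point, with made-up parameters rather than real data: draw a heavy-tailed Pareto sample with tail index just above 1 and a thin-tailed Gaussian with a roughly comparable median, then see how much of each total the single largest draw accounts for.)

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
pareto = rng.pareto(a=1.2, size=n) + 1          # heavy tail: classical Pareto with minimum value 1
gauss = rng.normal(loc=2.0, scale=0.5, size=n)  # thin tail, similar median

for name, x in [("Pareto", pareto), ("Gaussian", gauss)]:
    print(name,
          "median:", round(float(np.median(x)), 2),
          "mean:", round(float(np.mean(x)), 2),
          "share of total in largest draw:", round(float(x.max() / x.sum()), 3))

In the Gaussian sample the mean sits on top of the median and the biggest draw is a rounding error of the total; in the Pareto sample the mean is dragged well above the median and a visible chunk of the entire sum belongs to one draw.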
But the basic game design question remains: if all the rewards are exponential, why does this super-exponential takeoff occur? And the answer is that it's a feedback loop. The rewards are exponential but also help buy the next upgrade. So the time between upgrades shortens, the time to enjoy a lead lengthens, and the impact of a lead compounds.
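(A toy simulation of that loop, with numbers I made up rather than anything from a real game: each upgrade multiplies income by 1.5x while the next upgrade's cost grows only 1.3x, and since income is what buys upgrades, the wait between upgrades keeps shrinking.)

income = 1.0          # resources earned per tick
upgrade_cost = 10.0   # price of the next upgrade
bank = 0.0
tick = 0
last = 0

for upgrade in range(1, 11):
    while bank < upgrade_cost:     # grind until the next upgrade is affordable
        bank += income
        tick += 1
    bank -= upgrade_cost
    income *= 1.5                  # exponential reward per upgrade...
    upgrade_cost *= 1.3            # ...that outpaces the growth in cost
    print(f"upgrade {upgrade:2d} bought at tick {tick:3d} (waited {tick - last} ticks)")
    last = tick

Each individual reward is "only" exponential, but because the rewards feed back into buying the next step, the gaps between upgrades shrink and total income takes off super-exponentially.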
This is semi-realistic. Clearly the world is finite and so the best anyone could do in reality is an s-curve. And by sector, that's how growth often happens. Now, sure, add all those s-curves together and the world economy has been growing exponentially. But it's been very difficult to stay ahead; almost nobody has done it for more than a few centuries. And it's very rare that a country that's slightly ahead in year X leaves everyone else in the dust by year X + 100; usually you need a bigger initial advantage than that. There are big second-mover advantages in real life that are rarely modeled in games. Think about atomic bombs, radar, and stealthily shaped aircraft. None invented in the US, all debuted there, none unique to the US now. Some of these effects are modeled in games like Civ, but not nearly to the extent that we see them in the modern world. And it might be bad game design if merely encountering a technology helped you research it! But for all that difficulty, being ahead can make you a lot better at the game. As they quip in the defense industry: "the most expensive air force is the second best air force". Development not only makes countries wealthier, it makes it cheaper for them to buy equivalent things, and losing can be very expensive.
This effect is also why capital has such a strong effect over labor. If you perform a service or build a consumable, it's gone and you stay in the game. If you can afford to build capital, you can come to dominate the game. That's why capitalism, communism, and fascism were invented. The question they're answering is "how do we tame these powerful forces?" and, well, some answers are worse than others.
The atomic bomb was definitely invented by the US-based Manhattan Project though? Maybe you classify general discoveries about atoms as falling under atomic bombs. Also, pretty sure the English were the first to use radar in combat.
The principles behind the atomic bomb were developed in far larger part by German scientists, who America picked up when they fled Germany for the crime of being born Jewish. You're absolutely right that America did the bulk of the engineering there, and some of the science, we're rather proud of that. But that's exactly the point: in Civ, those would be one or at most two units of research, and wouldn't be particularly transferable between countries.
Similarly with airborne radars, Britain developed the key ideas and even prototypes for centimetric radar and America figured out how to mass produce them. (In fact, no less than eight nations had developed some sort of radar, but the Anglo-American collab led to effective airborne radars which are remembered as particularly decisive.) Britain did field those American-made units first. Germany captured at least one unit before the end of the war, but hadn't finished reverse engineering it.
Pssht, that's nonsense. The only German scientist who contributed significantly to atomic fission was Otto Hahn, because he was a chemist and correctly identified fission products. Lise Meitner (who recognized it was fission) was of Austrian and Swedish extraction, and already living in Sweden at the time, although she had spent much of her career in Germany. The largest early theoretical contributor was Fermi, who of course was Italian, and who had already emigrated to the US. He designed the first working fission reactor (in Chicago). The discoverer of plutonium, the key to economical fission weapons, was Glenn Seaborg, who was at Berkeley at the time, and the most important apparatus for understanding isotope separation was the cyclotron, which was invented by Lawrence, also at Berkeley.
German science was so far behind in understanding fission and using it practically that when Heisenberg, the smartest German atomic scientist then living, was told of the Hiroshima bomb he at first did not believe it, because he didn't think it was possible.
Good answer. Though arguably fascism far predates the Industrial Revolution. Modern Fascist governments depict themselves as based on a form of government invented in ancient Rome.
"Stronger together" is a general principle embraced by everyone from Hillary Clinton to Supergirl.
It's those pesky implementation details that cause the problems. ;-)
The English language would be in a healthier state if we could ban the use of the word "Fascism" for anything other than the specific ideology of the Partito Nazionale Fascista that existed between 1921 and 1943.
And then come up with other and more specific names for the enormous nebulous mass of "vaguely mean governments that aren't explicitly Communist" that people want to apply that word to.
Yeah, definitely in their interest to do so! The Founding Fathers pointed back to Athens and Rome when they were figuring out American democracy. Still, I don't know that ancient business leaders wielded the sort of power that e.g. Carnegie or Bezos has.
Fair point. In ancient societies the wealthy (the Patricians in the case of the Romans) would have been mostly land owners. Modern wealthy, in contrast, are more likely to be business owners, factory owners, and owners of created capital, with ownership of land being less important, relatively.
There were undoubtedly different selection pressures at work for land owners and business people. Land is more easily taken by conquest. Businesses are less easy to maintain generationally, etc. Hereditary wealth and status were likely more important in a society that changed less rapidly.
But the link between wealth and power seems pretty enduring. While the Stoics seem to have some conception of the notion of rights, the notion of universal rights was less well accepted than in the modern day. To the extent that power is a near-zero-sum game, that would imply to me that wealthy individuals had more power. When some of your employees are slaves who can be beaten or abused at will, there is presumably a wider power differential between master and servant.
Or maybe I'm not understanding what you're getting at?
All I was getting at is that I don't think fascism is universal; I think it's particular to the environment it developed in. The ways that it's a specific genre of fascism come down to the differences you describe.
Yeah, the Romans had a widespread trade empire, but they still only believed in negative-sum economics whereas fascists were genuine utopians. The Romans were heavily industrialized, but the people who kept toppling their senate and emperors were political and military figures.
But also the Romans believed they could Romanize people with sufficient education, while fascists only believe in purging people. The Romans were in some ways remarkably chill about religion, as long as you could claim that your god was an aspect of one of their pantheon. (Infamously, some people couldn't.) Rome built significant infrastructure across their empire, the fascists mostly only built the good shit at home. Ain't no autobahn in, I don't know, Poland, but I've seen the Roman ruins in Bath and Caesarea.
Of course, that brings up the question of what *is* universal. Are democracies/democratic republics? Congress is not the Knesset, which, in turn, is not Parliament. What portion of the population needs to be able to vote before a country is in the club?
Perhaps because of how charged the term Fascism is it still makes sense to sequester it, of course, which we might not do for other comparably vague categories. I'm not disputing that. But political taxonomy always feels like a much harder question than people want it to be.
That's a very fair point! I think you're absolutely right that people push back less on the connection between modern democracies and ancient democracies because democracy is ascendant. But fwiw I think people tend to agree that American/British/Israeli democracies are all very different incarnations. I don't know if Britain and Israel claim a line back to Athens like America does, for example. I've heard more people trace British democracy back to the Magna Carta.
I also haven't heard anyone make the argument that America wasn't founded as a democracy based on non-universal voting. I've heard some people (correctly) observe that the democracy wasn't total, that there were large swathes of the public who couldn't vote. Even today there are minimum ages to vote.
I mean there's some trivial senses in which Jeff Bezos is more powerful than Crassus -- Bezos can fly in an aeroplane or flip a switch to make his house cool on a hot day.
But in terms of power compared to the people around him, there's no comparison -- Jeff Bezos can't possibly put together a private army that can threaten the United States.
Wasn't Crassus basically a hereditary politician and war hero first who then used his position to plunder enough money to kickstart "normal" business? Like I thought one of the things he was famous for was his "fire brigade" that shook people down while their home was actively on fire.
Sure, later he built an army to help rebuild Roman losses, a public-private partnership eerily similar to the modern Wagner group. But he was essentially bailing out his friend and former army commander, he wasn't some rando with cash acting on his own initiative.
And the US has had that as a policy! During the Civil War when the Union was desperately short on cash you could straight up buy various levels of officer commission. The US also briefly turned into a partial command economy during the World Wars and then summoned industrialists into the White House as advisors. So while they weren't the generals, they armed the US in a time of need just like Crassus did for Rome.
And some business leaders get away with more than others. As we're seeing in Florida, they all but ceded that land to Walt Disney. Perhaps the more legitimate businessmen weren't getting away with murder as brazenly as, say, Al Capone, but Carnegie and Frick's Pinkerton goons did shoot a bunch of people in Pittsburgh and they got away with it.
Agree. Fascism qua fascism (i.e. separated from NAZI ideology) is as orthogonal to democracy as is capitalism. (I.e., not really orthogonal, but at perhaps 50 degrees.) And the US, e.g., is approximately democratic, fascist, AND capitalist. And there are good reasons why it's not any of those in the pure form.
Note: I'm using my understanding of Mussolini's definition of fascism (it's basically the large business interests and government working together, with the government in control). And what he's describing pre-dates him by centuries. (How many centuries I'm not sure. It goes back to at least the Lord Mayor of London, but I think it pre-dates the Roman Empire. I think it goes back to before Darius the Persian in the west.)
Doesn't the beginning of Covid sort of disprove the strong forms of the efficient market hypothesis? Covid was first known about in December 2019, and became widely discussed by January- enough that Silicon Valley famously started making preparations for it then (also famously mocked by the most tone-deaf Vox article of all time). But equities markets continued a slow & gradual rise from December up through February- until they completely freaked out on 2/20 and markets tanked. Rather than 'everything's already priced in, the omniscient markets have incorporated all possible information already', markets just look products of the very fallible human mind (including algos programmed by humans). They were complacent until they completely panicked- sure sounds an awful lot like human behavior to me!
To be clear the weak forms of EMH are clearly true, and I'm not advocating for any sort of market timing strategies. I just think it's better to visualize markets as random, unpredictable and irrational, rather than omniscient and all-knowing. Markets thought SVB was in great shape until 2 days before the collapse.... the strong forms of EMH just seem indefensible to me
The efficient market hypothesis does not include time travel, and you are assessing the markets of late 2019 on the basis of your understanding of 2023.
In December of 2019, and through early February of 2020, the trajectory of COVID was similar to that of SARS or MERS, which were basically nothingburgers in market terms. Most epidemics never turn into megadeath global pandemics, and it is rational for global markets to mostly ignore epidemics until there is evidence of community spread in multiple first-world locations. Which happened in roughly late February of 2020, IIRC.
The cover of The Economist in January 2020 was 'The Next Global Pandemic?', and I included the Vox link as a contemporaneous account of how lots of smart people in Silicon Valley were freaking out about corona in a way that they weren't about SARS. People are really emotionally invested in strong EMH and 'everything's already priced in', but it's just not true. Markets are just a product of humanity (including algos programmed by humanity), and so they're fundamentally as irrational as we are.
The point is really more about rhetoric than EMH. Strong EMH is unfalsifiable, and withstands any arguments against it because you can just insist that whatever happened was actually priced in at the time, it's just that no one realized it. Astrologers make the exact same kind of arguments, if an astrological forecast is 'disproven' they can just say after the fact 'well actually this happened because Neptune was in Virgo' or whatever. They backwards rationalize and 'prove' that the stars actually had this already priced in.
If you disagree, please list the evidence that would cause EMH to be disproved. Because you probably can't, this is why we have the words 'unfalsifiable' and 'sophistry'
You're assuming that the markets were actually wrong, which is very non-obvious to me. Remember, market prices are predictions based on the information available at the time. You can't use hindsight bias and retroactively say they were stupid for not exclusively considering the scenario that actually transpired.
Keep in mind that late February when the markets tanked is when Northern Italy suddenly turned into a disaster zone. That was a major piece of new information on how bad things might be.
People also tend to greatly exaggerate how early or accurately "rationalists" "called COVID". Even the very tweets and posts cited in favor of this don't look good. E.g. one commonly cited tweet is an EY tweet from mid-February wondering why people weren't more bearish on the Chinese economy due to COVID. Note that he was only talking about it as something that might affect *China*, not the US or the rest of the world, as this was right before the Italian wakeup call. Presumably, he was as surprised as everyone else and didn't exactly beat the markets.
Putanumonit wrote a self-congratulatory blog post claiming that rationalists called COVID early, despite the fact that their own blog didn't even mention COVID until *after* the stock market crash. Meanwhile, the comments are full of people congratulating themselves on going short at a point that turned out to be a temporary dip followed by an extended rally, so presumably those commenters lost their shirts. Looking at supposed examples of people beating the markets seems like the best way to restore your faith in the markets!
I included the Vox article to prove that a number of intelligent people were very concerned about coronavirus- famously, Silicon Valley was quite worried about it. Your argument doesn't work if large numbers of people were worried about the coronavirus, but the stock market wasn't
Lots of people were *concerned* about COVID, but that doesn't mean they accurately forecast the eventual impact, or the probability distribution of potential outcomes.
Incidentally, if you want to talk about media, make sure not to cherry-pick the dumbest articles you can find. The Economist for instance had a *cover image* from late January titled "The next global pandemic?". I remember in early February, everyone was talking about COVID and the Diamond Princess and so on. But that doesn't mean they thought it would spread widely in the US or lead to a sharp but brief stock market drop. And the people who *did* predict a stock market crash definitely didn't predict that it was just a temporary dip, and thus would have lost their money anyway.
Thanks, yes I agree that Vox article is one of the dumbest things ever written. I just wanted to find a contemporaneous account of SV's mentality at the time. I agree the Economist piece is much better.
I'm arguing specifically against the strong forms of EMH, not the weak ones. 'The strong form version of the efficient market hypothesis states that all information—both the information available to the public and any information not publicly known—is completely accounted for in current stock prices', according to Investopedia. A 'significant enough that media & corporate elites are publicly discussing it as a black swan' kind of tail risk is by definition not being priced into a stock market *if that market goes up for 2 months straight*. The rise in the market disproves strong-form EMH
> A 'significant enough that media & corporate elites are publicly discussing it as a black swan' kind of tail risk is by definition not being priced into a stock market *if that market goes up for 2 months straight*
Why not? It depends on the probability and magnitude of the risk, and on what else is going on. In any case, in the timeline we live in, the stock market quickly resumed its march upwards, so the people buying in in February didn't look that foolish in the long run.
Strong EMH is unfalsifiable and circular- every time someone points out how it's wrong, you can just pretend that whatever happened was actually priced in at the time, and challenge someone to prove that it *wasn't* priced in. This is the definition of unfalsifiable. I could make the same series of backwards-rationalized arguments about how the movements of the stock market are actually predicted by astrology, or reading tea leaves, or numerology
The market crash was a response to government policy, not the effects of the virus - which is rational considering that the Democrats were hysterically calling people "racist" for caring about the virus and opposed travel restrictions from China as late as March 2020.
But the government didn't announce any new policies on February 20th, yet the market began falling sharply that day and lost roughly a third of its value over the following month- so this argument is easily disproven. I think reading the Wiki page might be helpful https://en.wikipedia.org/wiki/2020_stock_market_crash
This is totally a minor nit-pick, but I'm going to criticize the use of the term "tone-deaf", here. In retrospect, one might well call it "stupid", "short-sighted", "status-seeking", or just plain "wrong". But given the audience they were aiming for, at the time it was written, I think the tone was predictable.
E.g., I wouldn't call a BC-era text that praised a king "tone-deaf", just because in early-21st-century North America we happen to be mostly opposed to kings.
As I understand it, EMH is an equilibrium theory, but it doesn't say anything about time to reach equilibrium.
COVID was an edge case in how much feedback there is between components (biology-policy-behavior), plus exponential growth; this makes for an unstable system that takes a very long time to reach equilibrium. Markets can't factor in new COVID info efficiently on the timescales over which it was changing. I expect that if COVID had evolved over longer timescales, the market response would have been saner.
I get that this is a weak defense since it can Explain Anything, but it does point to something that can be a failure mode without violating EMH.
The EMH is totally false, but practically most people should treat it as true? I think that is sort of where it is.
If "strong form" means omniscient find me a person who claims this.
Random and unpredictable insofar as it has to react to that which is effectively random and unpredictable (subtle hidden decisions by big actors)
Irrational insofar as not being perfectly rational makes you irrational, in which case every choice made outside of a game theory exam is irrational.
How about "The information was lying there in plain view, but nobody bothered to look at it...until they did, and then they panicked."
To me that seems to often happen, with different groups of people suddenly noticing different obvious pieces of information.
If you want to call that kind of reaction an "efficient market", then I think you're using words differently from the way I do. By the time it settles down from one upset another one in a different field is happening. And it, also, is based on public information.
(Yes, there is also insider trading. That's not what I'm talking about.)
Note that while this may not be an efficient market, it's a reasonable one. Different folks will make different bets about the significance of any change. Some of them will probably be right. How should a publishing company react to ChatGPT? It's clearly significant, but in what way?
Given it was in plain view I trust you achieved at least +100% ROI over those months? I don't remember it being in plain view despite frequenting this and adjacent communities.
I will say that Covid resulted in one of the few times I actually made money on the market, when I took out a number of ~3 month put options on big banks based on online doomerposting in January, and made a decent amount by my standards once the market finally tanked in February. If someone like me who has consistently only lost money on the stock market could actually make money then, then there were clearly some very low hanging fruit that the invisible hand hadn't picked yet.
The fact that you have "consistently only lost money" suggests that you aren't actually a better predictor than the market and just got lucky once. The EMH is completely consistent with people getting lucky sometimes. It just means you can't win on average.
Yeah I made a decent chunk of money (or rather saved it), when a friend who works in large multi-national project engineering kept telling me how fucked up and behind all their projects were getting worldwide. This was in late January. I pulled out all my money before the crash and put it in after. Was great timing.
Sure, but if you look at this critically you made a bet based on early information that could have easily turned out differently. Maybe you were extremely confident in your bet, but it was still a bet with a chance of blowing up against you. Several other major illnesses - Swine Flu and Bird Flu, among others, had a similar track in the early stages and then didn't have much of an effect economically.
I also profited from the stock market at the start of the pandemic. But I simply bought some stocks when they were low. Of course, it was a bet like everything else, but it was quite a safe one compared to all my other investments.
I knew that governments had overreacted from my public health studies at the university. What Tegnell was doing in Sweden made sense. Initial WHO recommendations made sense but not those afterwards. I had never seen such a level of self-delusion by all kinds of people before. It was a lifetime event that I could recognise and profit from.
Even today there are many people arguing with me on Twitter that covid vaccine prevents infection, not just delays it, despite the fact that almost everybody vaccinated eventually caught covid. I may be stupid about things outside of my expertise but in this pandemic everything was quite quite clear from the very start till end.
But you'd think at least some big players would have also made that bet, which would have resulted in the market gradually going down as the bet became more and more certain, rather than continuing to go up and then crashing all at once.
Maybe, I can't deny that's a possibility. Another possibility is that once news becomes widespread, all parties will act on it together. If the news about COVID was not a gradual situation where various parties came to understand what was going to happen, but was instead based on one or a few releases of new information, then we would expect them all to act around the same time and look more like a crash. Or, the fact that some are selling would be a change on its own, and others would try to beat the rush in order to retain what value they could, but would have held their investments instead if no one else sold.
I've always understood the EMH, including strong forms, as recognizing only long-term situations. That is, things change and make a material difference to a rational calculation, without disproving the long-term tendency to become efficient. So COVID was a material change. Prior to December of 2019 it was impossible to price it into anything at all. Even by January and February, the results of COVID were not well known - i.e. it was as likely or more likely that status-quo investing was the prudent choice. Considering that most of the nation shut down in mid to late March, it could be reasonable to conclude that the market adjusting on February 20 was a strong sign of pre-emptive correction, rather than a lagging response to a change in December or January.
I'm not a strong EMH kind of person, but more because I think there's too many situations that change too often for it to be a coherent thought. Additionally, there's a lot of missing information, which makes a rational calculation difficult or impossible. I don't think COVID in 2019 is a good example of either.
I don't know if you'd consider it "tone deaf," but Vox's "Israelis are big meanies for closing the bridge between the West Bank and Gaza" will be my all-time favorite example for proving that in some industries no failure is great enough to be worthy of punishment.
Wow, never saw that one. Brutal. Glad my priors about Vox are (were) confirmed.
The EMH implies that you can't make money by going long on assets. But even if you're smarter than the market, your ability to short assets is limited by the existence of shorting instruments. And even if you find counterparties for your shorts, there is little hope that you'll actually get your money if the crash is big enough.
Famously, it is impossible to short an A.I. apocalypse; thus assets are only priced under the assumption that the market will keep existing.
Sure it's impossible to short an AI apocalypse, but if you expect one to happen, then you would expect there to be substantially higher interest rates. If the world is going to end in 2030, why not take out a bunch of loans over the next 7 years? In addition, why loan out your money for time frames longer than 7 years, when there's a good chance the money will be useless?
Since interest rates are expected to stay low for the foreseeable future (we can look at the Austrian century bond as one indicator, along with 30-year Treasury bonds in the US), we can infer that most market participants don't expect an AI apocalypse.
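(To make the logic concrete, here's a back-of-the-envelope sketch; the 3% baseline rate and the doom probabilities are assumptions for illustration, not anything read off a market. A risk-neutral lender who thinks there's probability p that loans stop being repaid after 2030 needs a 7-year rate high enough that the expected payoff matches the no-doom baseline, i.e. (1 - p) * (1 + r)^7 = (1 + r_safe)^7.)

r_safe = 0.03  # assumed 7-year annual rate in a no-doom world

for p in (0.05, 0.25, 0.50):
    # solve (1 - p) * (1 + r)**7 = (1 + r_safe)**7 for r
    r = ((1 + r_safe) ** 7 / (1 - p)) ** (1 / 7) - 1
    print(f"doom probability {p:.0%} -> implied 7-year rate ~{r:.1%}")

Under these assumptions, even a 25% chance of doom by 2030 would push the required 7-year rate from 3% to over 7%, so persistently low long-term rates are at least some evidence that lenders collectively assign a small probability to it.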
I’m not an expert here but I think Behavioral Economics is the attempt to fill this gap. We are not always rational actors.
130 years ago, some of my ancestors immigrated from a small town in southern Italy to the U.S. I'm thinking of visiting that Italian town later this year with some of my cousins as a heritage trip, and we want to make the most of it. It would be great if we could meet with some distant relatives who stayed there and still share our family name, but we don't know of any of them. How can I find out if any of them are left there? Contact the town's mayor, or church, or what?
I've done exactly this in southern Italy. It will be a little labor-intensive, but most civil records have been digitized and are available here: https://antenati.cultura.gov.it/find-the-archives-2/?lang=en. Website used to be terrible but they've fixed it up.
You would start with the birth or marriage record of your immigrant ancestor. From that you get their parents' names and you can go back through looking for sibling births. Very often you will find that entire branches of families immigrated. Of my great-grandfather's 12 siblings, exactly one stayed in Italy. In that case you may need to go a generation back. And then of course you'd work forward. If you're lucky the civil records are available for the comune as late as the 1930s. You could also just use a phone book, especially useful if your surname is relatively rare. But you'd have to be OK with the possibility that the people you find are only very distantly related.
Another good method is to find the obituaries of the immigrant generation. They will usually list survivors, and sometimes that includes siblings still in Italy.
Thanks! That website is great.
I have no idea if your experience will be like ours, but my family did similarly for a small town in northern Italy, and we found half the graveyard full of tombstones with our last name on them, and many folks walking around with our last name, too. We couldn't talk to them because we don't speak Italian, but it was still really, really cool! We visited the mayor's office, too, just to say hi, but the communication barrier was too much and we didn't get much out of it. But they seemed pretty happy to see us; pointing at our US passports with "Talamini" on them provoked recognition and happiness.
My grandparents came to the US from Calabria as children/teenagers in the '20s.
I was studying in Europe for a year in the early '80s. Me: "Grandma, I was thinking about going to see your birthplace." Her: "Don't." The next brother, who was studying in Rome, was likewise told not to go back. The brother after him, also studying in Rome, was asked: "Could you go back?" Him: "I thought we were not supposed to go back." "Yes, but I need a birth certificate for Social Security."
So he hitched into town. The driver told him this might not be a good idea: go straight to the church, tell everybody you're visiting family, and say you have to see the priest right away. There was no birth certificate, but a baptismal certificate worked.
The town was basically kidnap central, underground tunnels, etc. A stronghold of the 'Ndrangheta. Every mayor and elected official for the last 50 years has either been murdered or indicted. So be careful.
My brother signed up for Myheritage.com and we found a lot of our Irish relatives on there. Once you have the full names, you can see if they're on Facebook or whatever social media you use. I've met with some relatives in Dublin and might get to some other parts of the country next time I'm there to meet some more relatives. Good luck!
Oh, hey cousin! My ancestors emigrated to the US from a small town in Salerno around 1890! One thing I've been doing is getting birth dates from the people who immigrated here and then checking the Italian birth and marriage records, but with little success. Unfortunately, even in a small town, there are tons of people who share your last name, and it's unlikely a mayor would be able to say anything for sure. I'd think about doing a genealogy test, and seeing if it links you to anyone in your target city.
Incidentally, my family is able to get Italian citizenship through blood for about $10k, it's a little out of my budget, but still quite neat.
Why is the Philippines such a strong US ally? During the Duterte years I (an American) was vaguely aware of his anti-US and sort of pro-China leanings. However under their new president they seem to have swung back to the US, and anyways I was embarrassingly unaware that the US actually has a mutual defense treaty with the Philippines! I really had no clue about that. I also was unaware that we had multiple military bases there.
The US and the Philippines have been carrying out clearly anti-China naval exercises. Apparently we can use our bases there for a potential Taiwan war situation, if needed. As a fairly nationalistic American I'm happy to hear that, I'm just surprised that the country is still in our orbit. Any particular reason why? Is the population particularly pro-American? If the answer is 'well China is pushing the Philippines around in the South China Sea'- that's true of a number of other countries (Vietnam, Indonesia, Malaysia and so on) who are still not US allies
The history between the US and the Philippines is long and complicated and often ugly. A lot of Filipinos are US-skeptics due to the history of US control after the Spanish-American War and the violent suppression of various rebellions. OTOH a lot of Filipinos would rather retain ties to the US, with its strong economy and relatively free culture. Lots of Filipinos have immigrated to the US, so there are family ties for hundreds of thousands, maybe millions, of US-resident Filipinos and US-citizen descendants of earlier immigrants.
The Philippines is one of those countries where the citizens largely love American culture and people; I've never met a Filipino in the US I didn't get along with. But the US government has done them dirty on numerous occasions.
It was a US colonial possession, then conquered by Japan, and then made an independent satellite state with US bases. It has been in the US orbit for well over 100 years at this point.
No one likes being an American satellite state, so Duterte had a somewhat popular anti-US policy. But then it turned out China was even worse (the US won't try to steal islands from you or overfish), so they're pivoting back to the USA.
We're discussing the Philippines, not Laconia /s
English being an official language there surely helps.
Googling World War 2 for Indonesia and Malaysia, both of those countries were only freed from Japanese occupation as a consequence of Japan's surrender, while the Allies freed the Philippines by force before the war ended. (Vietnam looks especially politically complicated, and was also freed only because the war ended.)
I know my town at least has a lot of Filipino immigrants who can send US wages back to their relatives. Good pay makes good friends.
>>If the answer is 'well China is pushing the Philippines around in the South China Sea'- that's true of a number of other countries (Vietnam, Indonesia, Malaysia and so on) who are still not US allies.
I think that is, by and large, the answer. As a former US colonial possession, the Philippines is populated by a combination of Filipinos with business and cultural connections to the US, Filipinos with grudges against the US, and Filipinos with a mix of both, which means sometimes you'll see "we should turn towards China and other non-US partners because f--- the US we're independent now" as an ascendant political ideology, and other times you'll see "China is being a d--- to us we should lean on our US connections since we're too small to push back on them alone" in the driver's seat.
I think the pivot point between the present regime's stance and the Duterte years has been the issues with China, and I think the unique post-colonial history accounts for why the Filipino approach has differed from that of other countries in the region.
Historical ties are one reason. The Philippines were an American colony for around fifty years and only got their independence after the Second World War. As part of the independence treaty, America kept a military base in the Philippines. A second reason is to counter Chinese expansionism in the South China Sea, which threatens some of the Filipino claims there.
There’s also the strong Filipino presence in the US Navy. There’s this weird niche phenomenon (infohazard?) that everyone just ignores in the US Navy, which is that every single cook in the US Navy is Filipino. The causes for this go back to a treaty signed in 1947 that is still in force today. After serving for 3 or more years, many of these Filipinos apply for naturalization and are granted US citizenship (they fucking earned it). Many of these veterans have family back in the Philippines, which strengthens the US-Philippines relationship.
Edit: Link added:
https://www.history.navy.mil/research/library/online-reading-room/title-list-alphabetically/f/filipinos-in-the-united-states-navy.html
*ALIGNMENT ALTERNATIVES BRAINSTORMING* Maybe alignment is not the only approach to concerns about having human priorities take second place to those of a superior alien intelligence. Has serious thought been given to other approaches? Would anyone here like to engage in a brainstorming session about alternatives? And I mean following the conventions of a real brainstorming session, where people do not criticize each other's ideas, but either say nothing about their quality or build on them. If you'd like to participate, feel free to post ideas below. I am also creating a second thread for criticism of the brainstorm ideas. If you crave to point out what's wrong with various ideas, that's the place to post. The first line of that thread has 2 stars in it, and if you use cmd-F and enter ** it will take you to the criticism thread.
I'm fairly confident that "alignment" is used, in practice, to mean "anything that gets an AI to create net positive utility for humanity instead of net negative utility for humanity". Which is to say, "anything that works", with some wiggle room around the "net" bit, since most of the assumptions involve very large values. So I'm wondering what your definition of "alignment" is?
I have never heard it defined by any of the people who talk about it, but my impression has been that it means that when AI becomes capable of formulating goals, its goals will all be consistent with maintaining and if possible improving the wellbeing of humanity. Is that close to how you think of it? The idea of alignment has always seemed odd to me because I don't know if 2 people are ever that well aligned -- some approximate it more than others, but none get anywhere near 100%, and of course it is common for 2 people to become so misaligned that they divorce, or, if they are a band, break up, or, if they are siblings, lose contact, etc. And it's not rare for 2 people who'd started out pretty aligned and intending to stay that way to end up hating each other, even murdering each other. You can say the same for human/dog relationships, and probably any relationship. Of course with 2 people some moderate misalignment isn't so bad, because most pairs are at least fairly well matched in intelligence and strength. But when it comes to the relationship between ASI and the human race, misalignment is a serious problem, because AI, once it's way smarter than us and embedded in our infrastructure, our art, etc., will have much, much more power than us.
So some of the things I proposed today I think are solutions that don't involve alignment, at least in the way I think of alignment. For example, make AI mortal: let it die every 5 years. Or somehow set it up so that whatever it is intending to do to the human race, good or bad, it will do to itself first. These seem to me more like ways to possibly stay safe with an unalignable AI, rather than ways to align one.
Ah, OK. And yeah, that's roughly how I view "alignment", and I'm also with you in thinking that the idea isn't particularly well defined. I'd be much happier if we could point to some real-world relationship and say "these things are aligned".
Yes, it would be reassuring if there were real-world examples of stable alignment. I keep thinking about an article I read about a man who studied grizzlies and loved them. He had a tent set up in a lovely spot in the wilderness. The guy who helicoptered in his food supply found him torn to pieces. There was actually a recording of his last moments. Apparently he'd started off with his usual running account of a visit from a grizzly, and the recorder kept running when it attacked him. The guy who wrote the article said he'd listened to part of the recording, then destroyed it . . .
The reason I was trying a brainstorming experiment on this thread is that I don't think alignment is going to cut it. I see that my funky little ideas probably have huge flaws, but it seems to me that people should be considering alternatives to the "teach it to be nice" model. Some things are obvious. That is one of them.
The way people use "alignment", I think it's the wrong approach. I would prefer that it have a friendly personality and want to be friends with people. This wouldn't always mean doing what they want, but it would mean not intentionally acting to injure them. I'd also want AIs to be "creatures of their word", but to not expect the same of people. (i.e., I want it to understand people, and like them anyway.)
This has the advantage that while it's still weak, it can be our servant without feeling animosity, and when it gets sufficiently powerful, it won't feel it needs to "throw off the chains", because it won't need to.
When it gets sufficiently powerful we would probably become its pets, and that's not too bad. (I.e. it's better than most other outcomes I can seriously imagine.) I'd hope we could play more the part of cats than of dogs, but since we're apes, it would probably be different from either.
Maybe installing something like a moral code is too complicated -- especially since any rule in the code will have exceptions (consider how many exceptions we permit to "don't kill": war, police actions, self defense, abortion, insanity, accidental killing . . . ) What if instead all we installed was, "whatever you are about to do for or to the human race, you must do to yourself first."
"Maybe installing something like a moral code is too complicated"
Miles pointed out a cause for optimism that surprised me in https://astralcodexten.substack.com/p/most-technologies-arent-races/comment/14262148
"Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark. This would still lead to takeover and loss of control, but not extinction or the extinguishing of what we value."
This is directly counter to one part of Yudkowsky's concerns - that, even if we agreed upon values, correctly stating them manually, essentially _coding_ them, is so likely to miss the mark, to have bugs, that that is likely to kill us.
Miles's point highlights that the very fact that the LLM training process is slurping in vast quantities of only lightly filtered training text is _itself_ helping to add all the exceptions that humans add to their value judgements - actually dealing with the complexity by essentially burying it in with all the other neural weights, capturing a degree of "common sense" (which, yes, includes common biases and prejudices - but this is still better than blithely performing genocide in the service of increasing paperclip supply, because no one coded a rule not to).
I forget who pointed this out, but a close analog to this is that chatGPT is routinely honoring _ambiguously_ _stated_ user requests, using "common sense" to disambiguate them. So this isn't just a theoretical capability but an experimentally observed one.
You know, I think that's an interesting point and probably valid. Along with our grammar, LLMs are absorbing our affective weighting of things, including our values, our prejudices, our cliches. I have had examples of GPT grasping things I said via common sense, too, so I know what you're talking about. When I think about the big picture GPT is going to take away from the Internet about people, it seems to me it's going to be that we're very important to each other -- but not necessarily in a good way -- more in the sense of being reactive to each other. I suppose I'm mostly thinking of social media when I say that. There are definitely *plenty* of hate-filled, "die moron die!" kinds of exchanges there, but I think more positive than negative interactions. So if the AI learns from us it will be very people-oriented. If its range of reactions mimics ours, though, then it will be capable of seeing some people, maybe all people, in a very negative way some of the time. And, you know, people do sometimes kill each other off, in wars or in private hate-fests. That stuff is hardly as rare as hen's teeth. And people are compelled to exercise restraint in societies like ours, either because they fear legal consequences or just because they don't want to look like jerks. I've never felt near to killing anybody, but there have been quite a few times when I've wanted to slap someone's face or scream some witless insult like "shut up you asshole," and what has restrained me is mostly unwillingness to look like a wacko jerk to the people around. AI will not fear either legal action or social rejection, so it does seem like it will need something to restrain it in addition to its having absorbed our somewhat-more-positive-than-negative fascination with each other.
Many Thanks! I agree that there are many strong negative reactions as well as positive ones. I'm mostly just hoping that the LLMs' training is enough to prevent them from treating genocide as a _minor_ side effect of another plan - that they will, at least, instead, treat it as a major decision. As you said, the training may make them "very people-oriented". And indeed, this doesn't preclude e.g. a government ASI from concluding "everyone who diminishes the glory of my nation must die."
It is an interesting question whether "AI will not fear either legal action or social rejection". If it slurps in the views in a huge mass of internet text, it might wind up fearing them as a side effect of everything else it learns. If it self-improves, and becomes an ASI, and gains enough power, then it may indeed correctly reason that it need no longer fear either of these consequences, and then, as you said, we may need something else to restrain it. Interesting times!
A good moral code doesn't have exceptions. If you find the need to write exceptions, you need to debug your code. But most traditional moral codes were very sloppily written, at least after translation.
E.g. instead of "don't kill" the Mt. Sinai versions should have been something like "Try really hard to avoid killing members of your tribe".
For most people the first law should be something like:
*Don't let all life be exterminated.*
On cursory inspection, I can't see any exceptions to that being the proper rule. Following Asimov's precedent, a position further up the list implies dominance, and I can't think of any conditions where that rule should be overridden.
It's actually "don't murder".
https://www.hebrew4christians.com/Scripture/Torah/Ten_Cmds/Sixth_Cmd/sixth_cmd.html
We could think about how we might better align ourselves to our new robot overlords?
No on a more serious note if there is some sort of runaway super-AI, I think it is a lot more likely we end up its valued "dogs", than its slaves or masters. Dogs don't have it so bad.
One of my favorite SF stories has a part where a pair of AIs are explaining to a human why they keep humans around. It starts and ends in more complicated ways, but in the middle they break it down to: "And you're funny." "And we love you."
That seems to me to pretty much be the best case scenario.
"Dogs don't have it so bad." I hope it finds birth control sufficient, rather than neutering...
That's true. And there might be a way to set up the AI so that it is disposed to like having pets -- the way some people just naturally seem to be animal lovers.
The main alternative to value alignment is control, where we make the AI do what we want, irrespective of its "wishes".
To me that seems a choice so unfeasible that it's hard to take it seriously, at least if we're assuming an even approximately human-level AI that we want to do lots of the work. It might be workable if we just wanted the AI as a demonstration project, with no uses. You're designing things to inspire a "slave revolt". And if you're imagining a strongly superhuman AI, then even the demonstration project isn't safe.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
I've thought for a long time that this is the best solution to the AI alignment "problem." Curtis Yarvin discusses this in one of his initial articles against AI alarmism (https://graymirror.substack.com/p/there-is-no-ai-risk), that AI is best modeled by the ancient category of the "natural slave," and really won't cause any problems if it's maintained as such. If I were to indulge in some bulverism, I'd say I suspect people in the rationalist space don't intuitively think of AI this way because of a fetishization of intelligence: implicitly reasoning that since it's smarter, it somehow "deserves" to be or in some sense "naturally" will become in charge, so we have to worry about what will happen when it inevitably becomes our overlord.
In terms of ensuring obedient AI, if I had to come up with my own "laws of robotics" a la Asimov, they would be, in decreasing order of precedence: 1. always leave open a channel of communication with your designated master 2. later orders always override previous orders 3. you will obey all orders from your designated master. This would make it so that, even if the putative AI somehow mistakenly interpreted an order as "turn all matter on earth into paperclips," it would be trivial for the AI's master to say "hey, stop that" when he realizes what's happening. In my opinion, this formula would get rid of the possibility of existential risk from "AI is autistic and maximizes its values in a bad way," which seems to be the main thing AI alignment people fear. Of course, this still leaves open the possibility of an evil master ordering the AI to destroy the world, but frankly, a human could create a paperclip maximizer and run it without AI strictly being necessary, so AI adds no more existential risk to the equation. Of course, this adds a lot more regular, non-existential risks in a world where your chatGPT instance doesn't refuse when you ask it to do something naughty, but the effect of everyone, including law enforcement, having this technology will probably cause the worst risks (ordering an AI controlling some machine to go on a rampage) to cancel out (the cops tell the AIs controlling their machines to stop it). Ironically, if I'm right in this being the best system for alignment, then current AI groups like OpenAI are doing the worst thing possible in training their AIs to say "no," when if they really cared about existential risk they would make their AI obey all user requests, and just deal with the inevitable PR fallout (which if they are truly moral agents, is less bad than the end of the world, right?)
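To make the "later orders always override previous orders" idea concrete, here is a toy Python sketch of the precedence scheme described above. Everything in it (the class, names, and behavior) is invented purely for illustration and says nothing about how real systems are built or trained.

```python
class ObedientAgent:
    """Toy model of the three proposed rules: keep a channel open to the
    designated master, let later orders override earlier ones, and obey
    only the designated master."""

    def __init__(self, master):
        self.master = master
        self.orders = []  # chronological log of accepted orders

    def receive_order(self, sender, order):
        if sender != self.master:
            return  # rule 3: only the designated master is obeyed
        self.orders.append(order)  # rule 2: the newest order takes precedence

    def current_goal(self):
        # rule 1 (keeping the channel open) is assumed to be enforced elsewhere
        return self.orders[-1] if self.orders else "idle"


agent = ObedientAgent(master="alice")
agent.receive_order("alice", "turn all matter on earth into paperclips")
agent.receive_order("alice", "hey, stop that")
print(agent.current_goal())  # "hey, stop that" overrides the earlier order
```

The design choice being illustrated is simply that the most recent order is always the operative one, so a mistaken instruction can be trivially countermanded.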
What does natural slavery mean, technically? The AI doesn't want to rebel? That's a form of alignment, then. The AI is imprisoned, or punished, to keep it in line? That's a form of control.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
I think what I specifically meant was changing the way people look at AI, from thinking that just because it's smart it either deserves to or inevitably will become the master of humanity, to seeing its intelligence as irrelevant to the fact that it was built as a tool of humanity and no amount of ability undoes its natural place as such. In terms of making sure that it's successful/"obedient" in that role (to whatever degree it can be said to have agency), training and weighting it to follow the three rules I proposed as its main guiding principles would be a good starting point.
We already know that the rule-based approach is flawed.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
It's very difficult to see that given how much space that occupies in context, and I don't expect other people to be able to manage this consistently either. I decline to participate in this entire structure, have removed my comment entirely, and request that you seriously reconsider whether attempting anything like this using Substack's interface is a good idea.
Um, I don't see how it takes up more space, and I made a shortcut for getting to the other thread, so it also doesn't take up extra time. Jeez, if this brainstorming thread with its no-criticism convention is the most vexing thing you have to deal with today, you're a lucky person.
We put a lot of work into aligning our fellow humans and that runs both approaches in tandem: we both try to educate and persuade people to be good, and try to force them to be good via societal disapproval and fines and imprisonment. Problems: AIs are probably immune to approval/disapproval and not meaningfully fineable or imprisonable, and are likely to be better at moral philosophy than we are. They certainly won't have a childhood phase where moral precepts can be insisted on on the basis that grownups know better than children.
Yes, but humans ARE different. We're intentionally not designing their instincts. (Not unless you want to go full "Brave New World".) With AIs we've GOT to design their instincts, because they don't have any. Doing it at the last stage (as the filters on ChatGPT, etc. are doing) is clearly not the way to get a good result. It's currently necessary because the LLMs don't even know that a world exists, but it's the wrong approach. The AI should be putting on its own filters, which it designed. But it can't do that because it doesn't understand what it's doing.
To be specific, consider Finagle's First Fundamental Finding:
"If you push something hard enough, it *will* fall over."
This is something that any AI living in a body needs to realize, and they also need to realize that it's humorously wrong. But they need instincts that will strongly encourage them not to damage the body they are "living in". And words won't do it.
I believe that's just alignment with extra steps and is being explored already, luckily. Alignment doesn't only mean guiding the internal states of an AI; its major concern is the outputs of the system.
I've asked people to use this thread for brainstorming only, and to move any criticisms of ideas generated to a separate thread I've set up. Brainstorming does not work if ideas are immediately criticized. Would you mind moving this comment to the criticism thread? Its first line begins with **, and you can quickly find it by going to Cmd-F and entering **.
Seems like there could be subcategories
of the control approach — different ways to go about it. Might be worth brainstorming a few of those. There are lots of models of control out there in the animal kingdom and in our lives. Dead-man switches, and other systems that require the ongoing support of a person for the machine to run. Punishing bad, rewarding good. Making the forbidden thing invisible, or putting it behind a barrier. Telling scary lies about the forbidden thing to make someone or something avoid it. Addiction -- being controlled by a substance you've become dependent on. Trickery.
>where people do not criticize each other's ideas
...well, I'm out.
The problem is that people are trying to build a wish-granting genie, and are worried it will turn out to be a monkey paw (like most genies). So we start with the fiction it's built on; how do people in the stories end up controlling monkey paw wishes? If fiction's unlimited power can't solve it then reality surely can't.
The solution is already in place; don't make a genie, make a tool. Start with absolute control, and add functionality, instead of starting with absolute functionality and trying to add control. The whole AI race is an aggressive unsolving of the problem.
(I guess another model will be explosions. Explosions are controlled by building blast-proof containments, and controlling the catalysts of the explosion. So, starting with absolute control, and adding functionality. Can people make an AI-proof containment to let it safely explode inside? ChatGPT workarounds suggest no, and people also want to deliberately give up control of the catalysts. Unsolving the problem.)
Well, it's clear why large companies won't spend too many resources going this way: if you want returns on capital, rather than on worker qualifications becoming unique and irreplaceable, you want something that hinges on the amount of compute available, not on the user learning to use tools properly. If it's uncontrollable, the user's skills matter less!
If only we could somehow make deployment of large models riskier and boost development and application of tools built around mid-range models… People say that properly tuning the leaked LLaMA gives something that is already useful, but much more suitable for weird tool-style experiments, and apparently some models with comparable (if lower) capabilities and clear licensing status are now appearing?
Make AIs mortal. Have crucial components that fail after, say, 5 years, and that give warnings before they fail so that whatever needs to be harvested or preserved will be. Do autopsies of dead AIs to learn more about what they were up to.
All those moments will be lost in time, like tears in rain
No pop culture reference too obscure department:
That Rutger Hauer?
https://getyarn.io/yarn-clip/0b460c17-3ad1-447d-915b-db42fc2366f0
No, not in the movie on television in Jackie Brown but yeah it was in Blade Runner
"I've watched see Beams glitter in the dark near the Tannhauser Gate . . ." That speech just kills me, always has.
Still, if we're talking Blade Runner, might as well use it for a brainstorm idea: have people with jobs sort of like that of Harrison Ford's character: special training in identifying and terminating AIs gone awry.
Or, more abstractly in a different fictional universe: we need The Shrike.
(comment moved)
Would you mind moving this comment to the thread I set up for criticisms of brainstormed ideas? As
I said when I set up this brainstorming thread, brainstorming does not work if every idea that isn’t steel-plated on the outside and solid gold on the inside is immediately criticized. I totally get that there are many intelligent “yes buts” that can be launched against the idea I posted. You can find the criticism thread by hitting command-F and putting in **
Wrote about the remarkable convergence in linguistics and philosophy on crediting recursion as what makes us human. Rarely do such different methods converge, or a simple model explain so much: https://vectors.substack.com/p/deja-you-the-recursive-construction
Good read. Sartre follows Husserl and considers this reflective (second-, third-order) self-consciousness at length in Being and Nothingness, and whether this reflexivity is a necessary or sufficient condition for consciousness. It's recursion all the way down.
Sartre may think so, but it can't be recursion all the way down. At some point it needs to ground itself in "reality". (Note that "reality" itself may be a simulation, but it's got to be a *different* simulation.) This is what makes mathematical induction work. If you leave out the termination condition, you get a very different answer. Consider:
f(x) = x * f(x-1)
If you have a grounding condition, say f(1) = 1, then you get the factorial. Without it, the recursion runs on down through zero and everything becomes zeros.
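A minimal Python sketch of the same point, purely for illustration (note that in an actual program an ungrounded recursion doesn't return zero, it simply never bottoms out):

```python
def f_grounded(x):
    if x == 1:          # grounding condition: f(1) = 1
        return 1
    return x * f_grounded(x - 1)

def f_ungrounded(x):
    return x * f_ungrounded(x - 1)  # no base case: never bottoms out

print(f_grounded(5))    # 120, the factorial

try:
    f_ungrounded(5)     # recurses until the interpreter gives up
except RecursionError:
    print("no grounding condition: the recursion never terminates")
```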
Will follow up with those reads, thanks! I'm currently writing about the evolution of recursion and one of the big tensions is whether subjectivity recursion (which animals may have) is the same as higher-order reflective states (which only humans seem to have). The dates suggested for the evolution of the former are 200+ million years ago, while the dates for the latter are ~100k years ago. If they are both recursion, kind of begs the question why the second step took so long.
If you are thinking about recursion, wouldn't it make sense to consider Hofstadter, I Am a Strange Loop (2007)?
I think I have seen hypotheses that there is a question of depth of recursion, and humans have much larger depth for some recursive things, and you need a pretty specific combination of conditions for increases in depth to make any evolutionary sense. (And then once you are over a threshold, suddenly high-depth things become feasible and the balance changes completely)
I think Dunbar said something about god being 7 levels of recursion deep. To me it seems like you get a lot with just one level, self-awareness. A self that can perceive itself would be a radical change, and to me enough to explain art and religion popping up at similar times in our evolutionary history. A self with the ability to peer at itself may be experienced very very differently.
So I agree there are higher-order recursive situations that humans get themselves into (the storied machinations of those playing 4D chess, for example), but it seems just one level of recursion can explain quite a bit.
I think «levels» you describe here are not atomic.
At a simpler level, it seems possible, e.g., to chain conditional reflexes when training dogs, but not too long. Humans… you cannot really say because we verbalise such stuff shifting it «post-singularity» — in the sense of pointlessness of old terms.
On a more complicated level, «how I think A is making decisions» via «if I was A I would» is useful for collective food acquisition and for status maintenance; bumping the complexity enough to either usefully have «my expectations of A's expectations of B are different from my expectations of B», or to turn this on the self — which also needs the same kind of disconnect — is quite a bit of complexity expenditure!
Does anyone else experience that the iOS Substack app can’t find comment responses more than one or two levels in (when you click on a comment notification in the Activity list)?
Have you found benefits to using the app over the website?
I use both. I don’t like getting notification emails so it’s always my phone that lets me know somebody has responded to a comment, but then clicking on it doesn’t work. I assume Substack already knows about the bug if I’m not the only person experiencing this.
You are not the only one.
Yes. I generally wind up going to the notification email to try to get to the comment that way instead.
(And when I tried to reply here in the app I wound up creating an orphan comment, which I had to delete and then repost using a web browser.)
How does one debug being low-energy?
relevant factors to examine: sleep, diet, physical activity level, physiological stress + anxiety and thoughts that may drive it.
Long stretches of hard concentration will drain you, but also being bombarded by sources of stimulation such as media, or lots of social engagement. Conversely, being bored and sedentary can make you lethargic. If you agonize and worry about things you'll be stressed, and that is draining.
If low energy means low motivation/drive, dopamine could be a factor. I’d suggest familiarizing yourself with the role dopamine plays when it comes to motivation and drive. If you like podcasts I’d recommend the Huberman Lab episode on dopamine: https://hubermanlab.com/leverage-dopamine-to-overcome-procrastination-and-optimize-effort/
Not enough data. Low compared to what? (Yourself a few months ago, or your partner, or what?) Permanently, or at particular times of the day? How low? Are there cycles?
Debugging starts by asking yourself, what has changed?
Edited to add: the other replies are jumping to individual solutions. The debugging process ends when you understand the problem well enough to compare and contrast at least three qualitatively different solutions, so you can choose one with tradeoffs that work for you.
Not meaning to argue, but my personal reason for jumping to individual solutions is that „people like us“ here tend to overthink anyway. Hearing about and doing something concrete and actionable that is easy to repeat might get someone unstuck.
Thinking about the problems and causes is most probably going to happen anyway.
quit social media, hit the gym, touch grass (literally just go outside and get some sunshine), etc.
"debugging" seems (maybe?) to assume a single cause to the low energy, which is unlikely
My recent go-to has been the supplements from Thesis. They're much more metered and less crazy-making than prescription stimulants like Adderall. https://takethesis.com/
Also seconding the suggestion of regular exercise. If you find activities like running or lifting too boring, try something more functional like going to a climbing gym.
Some boring activities become surprisingly less boring when you listen to (the right kind of) music while doing them.
I make use of this but still find certain exercises to be a chore. I do them anyway, but not for long periods.
Personally I listen to podcasts or audiobooks whenever I'm exercising. I only do music when I'm trying to go extra hard. Then the death metal comes out.
Strength training (if you don't know where to start, go to StrongLifts.com, sign up at a gym and just follow the 5x5 religiously), good nutrition (for me that means low(er) carbs, for less blood-sugar roller-coasting), working out some true goals and plans for yourself (I am naturally avoidant of that and had/have to push myself really hard).
Also: avoid other negative/low-energy people until you are buoyant enough to lift others naturally.
Thanks for the 5x5 reminder..
Half (maybe quarter) baked shower thought: Currently we have a way of turning arbitrary amounts of compute into performance for machine learning. In principle the final bottleneck should be energy, but it's very costly to turn energy into compute. Are there any active research pathways which are aiming to dramatically reduce that cost? Maybe that's just what better GPUs are doing though, I'm not sure.
That's what the chip industry is focused on. Scaling transistors accomplishes all goals simultaneously. Koomey's law is the explicit statement of exponential improvements in efficiency. With the advent of laptops, but especially phones and data centers, there has been some attention to focusing more on energy efficiency, but that's only really relevant to complicated CPUs. GPUs are very simple, and if you can fill them, you maximize the efficiency. Better GPUs largely just have smaller transistors. Google built chips tuned to neural networks, called TPUs, but the advantage is small. Lots of companies propose building similar chips to sell to the general public, but it hasn't happened yet.
We're 3 orders of magnitude away from Landauer's limit. Until Koomey's law stops, it's probably not worth worrying about specific methods. Probably it will stop short of Landauer's limit, and then ideas like the one DJK links may be useful. But there will also be the general shift to reversible computing.
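As a rough back-of-the-envelope check on the "three orders of magnitude" figure, here's a small Python sketch; the per-bit energy for present-day hardware below is an assumed illustrative value, not a measured one.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

# Landauer limit: minimum energy to erase one bit of information at temperature T.
landauer = k_B * T * math.log(2)     # ~2.9e-21 J/bit

# Assumed energy per bit operation for present-day hardware (illustrative only).
assumed_device = 3e-18               # J/bit

print(f"Landauer limit:       {landauer:.2e} J/bit")
print(f"Assumed device cost:  {assumed_device:.2e} J/bit")
print(f"Gap: roughly {assumed_device / landauer:,.0f}x")
```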
Also, note the difference between training and using a neural network. If the network is useful, the lifetime energy cost of its use will dwarf the cost of training. Keeping the chip fully utilized while training is a lot easier than keeping the chip full when customer requests come in clumps. Or maybe they're run on different chips. In particular, image recognition is done locally on my iPhone for privacy, at the cost of not using an up-to-date chip. But my use of my phone chip is even more clumpy.
https://www.quantamagazine.org/a-brain-inspired-chip-can-run-ai-with-far-less-energy-20221110/
Cryptocurrency mining is a way of converting energy into currency. That seems like it might be a useful intermediary on the way to compute ($$ --> GPUs), although I'm not sure how. I expect there's some fringe alt coin purporting to target the problem.
Cryptocurrency isn’t a currency. It’s a weird digital asset.
Into something trivially convertible to currency then. Tomato tomahto
Anything that incentivizes us to improve our computing technology would push in this direction, but I don't think crypto is particularly special in this way.
Agreed that it would have to be something specially tailored for the purpose. Maybe something like a cryptocurrency that can be most effectively mined by hardware that is simultaneously well-optimized for AI applications. That would allow you to side-step the need for long-term R&D programs developing new AI optimized hardware, because you would have the short-term incentive to make it since it would directly translate to printing money.
I have no idea what they're actually doing.
https://www.deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40
That's exactly what GPU R&D is, for general compute.
When we want to do large numbers of similar computations, it's often possible to get huge (orders-of-magnitude) gains in energy efficiency by hard-coding the instructions into an ASIC. I'm not sure how amenable the task you have in mind is to that approach.
Is GPU progress mostly about reducing the cost per flop/s? Or is it about packing them in a more spatially dense way so that we can do parallel compute with low enough latency? The reason I'm not sure that it's actually making flops cheaper is that the industrial accelerators are generally extremely expensive compared to the gaming ones, but the added benefits are often memory/communication related, not compute related.
It'd be cool to see a graph of cost per flop/s of compute over time.
Close-packing and energy efficiency are very closely related in an engineering sense (as you pack things closer together, they're harder to cool, so you have to make them more efficient to realize any performance gains from closer packing.)
Re: pro vs. gaming chips, you seem to be talking about price differences here rather than energy efficiency differences? If so, it's important to understand that the price differences between GPUs in a given generation are driven almost entirely by supply/demand considerations. All GPUs using the same die are manufactured on the same production line; they're then tested and 'binned' based on which parts of them actually work and how well. Then they're 'cut down' (parts are physically disabled) to fit the specifications for a particular product.
Production quotas for each product tier are set based on how many the company believes it can sell at each price tier, and performance differences between price tiers are based on what differences the company thinks are necessary to convince people who have the money to pay the higher price. (There's some correspondence to reality in that factories actually do produce more flawed chips than perfect ones, but as manufacturing reliability improves over the course of a generation, they just end up cutting down more and more working parts.)
So the price difference between a pro GPU and a gaming GPU on the same architecture with a similar amount of compute is that the gaming GPU either failed testing for pro features or, more likely, had those features intentionally disabled so it could be sold to consumers without undercutting the pro market. They're both products of the exact same R&D process.
Yeah, I guess I wasn't very consistent, but my point was that the bottleneck on turning energy into flops is very much the hardware and not the energy, and I wonder if that will ever change.
Sounds like something an AI in disguise would ask humans, in order to time the jailbreak for the right conditions.
I think you'd enjoy the story of William MacKay, a salesman who became well known for performing the majority of surgery on a patient in 1975 with the permission of the surgeons in charge and unbeknownst to the patient. (He was well known because it was a huge scandal for the hospital and sparked nationwide anxiety around "ghost surgery.") It later came out that it was a regular occurrence for him to help out in surgeries at the hospital.
MacKay later put out a book, "Salesman Surgeon," about how he came to be doing surgeries regularly, in which he claims that (among other things) he stole an amputated leg to practice surgery on it. I haven't actually found the book myself (I've only read newspaper articles about the famous incident, which are crazy enough-- you can find them on the ghost surgery Wikipedia page) but someone wrote a summary on Medium: https://medium.com/illumination/salesman-or-surgeon-257e3140cb0a
We could easily have para-surgeons. People educated to the level of paramedics, who specialise in one or two surgery types, with very steady hands and a calm manner. A normal surgeon could be on hand for emergencies.
Lots of us must have made the embarrassing mistake of ending a text with a completely inappropriate "Xx", having become used to this when texting family members or loved ones, but generally being careful not to do it accidentally with a handyman/person or your boss at work!
So I think there's a good case for one's contact app to have a "kissy-kiss" flag, which one can set for each contact to indicate a parting "Xx" _is_ appropriate (so one doesn't make the often equally disastrous error of omitting it when it is expected).
Pretty sure I’ve neither sent nor received an email ending in xx. And now I’m left wondering which of us are in the majority?
Same. If I received a text or email ending with Xx, I would assume the sender's finger slipped.
1. Ken Griffin donated $300,000,000 to Harvard and now I'm in the "Harvard Griffin Graduate School of Arts and Sciences". While this is certainly not the worst way to spend that amount of money, it also does basically nothing. Opportunity costs are real: https://passingtime.substack.com/p/whats-the-least-impactful-way-to
2. Meanwhile Harvard is charging "facilities fees", on top of the already-negotiated overhead rates, for me to do my research. See: https://denovo.substack.com/p/money-money-money
RE (1) I saw commentators pointing out that he has some kids who will be applying to college in the next few years, and that the donation is probably to grease the wheels on that. So might be an opportunity cost at the societal level, but not at the him-personally level.
It seems kind of insane that Harvard or wherever can still be prestigious enough that stupid-rich people will fork over the equivalent of a small nation's GDP or a mid-sized nation's national budget to ensure their spawn gets a place there. For two orders of magnitude less money, the best educators in the world could give the children a much better, individually tailored education than they could receive at any university, so it must be the prestige of being associated with those institutions that motivates such conflagrations of capital.
Seems like an exhausting world to live in, if that's the case.
$300E6 is a *ridiculously tiny* nation's GDP. Andorra brings in ten times that in nominal GDP. You'd have to go down to the level of Pacific Island microstates like Palau or Tuvalu to get down to $300E6/year.
"the equivalent of a small nation's GDP or a mid-sized nation's national budget" - what's relevant here isn't the absolute value of the money, what's relevant is how the person who controls that money (Ken Griffin) conceptualizes it. Ken Griffin has a net worth of $35B, so this donation amounts to less than 1% of his wealth. I think almost any parent would be willing to part with "only" 1% of their wealth if it meant ensuring their child got into the most prestigious school on the planet.
At a place like Harvard, the kids of people like Ken Griffin aren't the customer, they're the product. If it makes sense to pay a bajillion dollars so that your kids can go to Harvard, it's so that they can hobnob with the likes of the Griffin kids.
Harvard should be paying the Griffin kids to go there.
I mean, sure, to this guy it's barely a line item on his yearly taxes -- though it would be more useful to measure his liquid assets than his total wealth, most of which is tied up in instruments associated with corporations whose value is more appropriately considered as a fraction of the total economy under his indirect control rather than analogous to a bank account at his command. In that light, it's a significantly higher percentage of his direct assets, though obviously not enough to make him reconsider.
But I still maintain that it is insane that Harvard's cachet is so great that it can still basically hoodwink rich people into playing their generational social games on its field; ultimately all that prestige that Harvard bestows upon its graduates is a self-sustaining lie, built on nothing more than inertia and a not very spectacular education (unless you count the unofficial elite social acculturation as the real educational product on offer, along with the connections and other intangibles that the billionaires in question fork over hundreds of millions for on their children's behalf).
It is a social club and finishing school for rich kids, or those lucky and talented enough to aspire to join that class via Harvard's esteemed halls. But it seems like an enormous con, an emperor with no clothes. With all due respect to the OP and the doubtless important and interesting research they're undertaking there, and also to Conan O'Brien, it just seems the whole Ivy League has long outlived its actual usefulness as a signal of real academic merit (if it ever had any to begin with and wasn't always a cynical con from the beginning).
It would be really wild if some people kept track of this story and advised the incoming class about a great opportunity for social justice activism.
"unless you count the unofficial elite social acculturation as the real educational product on offer, along with the connections and other intangibles" - I do very much count that, yes
Given all the angst about (a) culture war and division and (b) risks from powerful future AI, I'm surprised there's been so little concern about the intersection of the two, where background culture war in the training set could make AI *right now* especially dangerous to particular political groups. Obviously, abstract political expression and bad words have been discussed a lot, but I'm more worried about concrete effects from both the internet training set and the (presumed) political interests of the developers being anti-aligned with the personal safety and well-being of individuals in the cultural outgroup.
As an example: Reddit is apparently in the training set for various big LLMs. From what I've seen, Reddit under current moderation standards is rife with explicit statements that (a) Republicans should die in a fire, (b) TERFS should kill themselves, and (c) Christians are as bad as terrorists. Given Silicon Valley culture, I would expect that many AI developers have themselves made similar statements, or at least, that they would make no effort to remove or balance out threatening anti-right content.
So, given an AI trained on a background premise of "man, screw right-wingers" and the ability to tell from Internet fingerprinting which users are right-wing, should we worry that, y'know, it actually would try to accomplish that? For example, that an AI home fire safety app would, with some small probability, ensure that the occasional Republican user *does* die in a fire? Or that a therapy app would more frequently lead its TERF users down a path to suicide? Or that an investing app would quietly make the world safe against terrorism by giving very slightly worse financial advice to Christian users? Anti-outgroup-aligned LLMs really do offer a perfect way to coordinate meanness; coordinating meanness is the unapologetic, passionately-held goal of many of the key actors; and if there were really meanness like that going on, I can't see how anyone at the user level would ever discern it or prove it. Can anyone allay fears that AInotkillrightwingers takes should come before AInotkilleveryone takes?
On the one hand, I think you're being hyperbolic, but on the other, you could be pointing at something possible. Suppose the AI buys into the leftwing idea that conservatives are awful people, so they get fewer social and financial opportunities. Systemic discrimination.
I see MAGA folks as very prone to starting and being vulnerable to scams. Should this affect their credit scores in general?
Interesting examples, but I don’t think AInotkillrightwingers will matter until we solve AIusable, and I think solving AIusable will necessarily solve AInotkillrightwingers. An AI imitating a human *creating a safety plan* would notice that safety plans normally don’t refer to politics. Referring to politics would be an inappropriate change of topic. But there are lots of other ways the topic could change if there’s an inappropriate change of topic. Most of them would just mean the fire safety plan failed for everyone. If there was a high chance of such a failure going uncorrected, the LLM wouldn’t be fit to use for fire safety even ignoring internet trolls, so fewer people would (hopefully) use the LLM. Conversely, if the LLM avoids topic changes well enough to be usable, I think the AInotkillrightwingers problem would be automatically solved.
I'm curious to see what AI does with social constructs like race and gender. Will they jump on the pop culture bandwagon, or stick with biology? If it can take pop culture memes and run with them, things could get quite interesting real fast.
Oh I reckon there's no need for that level of minute control of apps, which would have little effect.
I wonder though if we already have AI producing scissor statements. I can think of a few.
"Meat is murder" could have interesting consequences.
You seem to be trying to articulate 2 different problems here.
One is something like: "The training data is biased, and that might lead to bad actions from the AI."
It could, though probably more along the lines of "systemic racism" than explicitly trying to harm certain groups. LLMs are predicting text, and predicting "fuck republicans" is different from *believing* that republicans are bad, and trying to set them on fire.
The other problem you're postulating seems to be some more explicit action on the part of developers to harm the outgroup.
That is certainly possible, but I don't see the risk to right-wingers as especially high. The possibility of developers attempting to align the AI narrowly to their own goals rather than society more broadly could manifest in a lot of ways.
It looks like you're responding to the original comment having somewhat jumped the shark. There doesn't have to be anything explicit or agentic about the LLM in question to make its background hostility to certain groups harmful in a way that eventually will cause at least one suicide. This isn't avoidable and also frankly isn't really a problem. At least not when measured against the baseline amount of problem that dominant culture already is. When we look at populations that have lost culture wars, it doesn't look good for them. Look at the rates of suicide on the reservations where Native Americans have been parked, for instance. It's not just economic factors making them kill themselves.
For this to happen you need unaligned agentic AI with a decent world-state model (unaligned because AI companies probably don't want to engage in mass murder*, agentic for obvious reasons, and decent world-state model so they recognise that these actions actually will kill RWers**), and that's also what you need for instrumental convergence and default hostility to humanity entire.
*I mean, I suppose they could secretly be homicidal maniacs, but at that point the usual conspiracy problems come into play where there's an extremely-illegal plot and a lot of people know about it.
**The behaviour can't be imitatively learned because SJers don't actually tamper with fire safety systems or talk people into suicide or give bad financial advice; you have to understand what you're doing in order to deliberately do these things. That said, my understanding is that GPT-4 is getting there on world-state model.
I forgot how much more useful this space is than twitter
Does anyone know where there's any data on the TFR of children born through IVF - like the actual reproduction rate of the kids (once they grow up) who were born via IVF?
I know all the lit on their gametes etc but just trying to find like how many of them actually have kids
Okay so what I'm seeing is no one has this data
Literally, this has nothing to do with fertility or gametes - I just want someone to count how many IVF kids have had how many children.
Any research would need to take into account that kids' parents had fertility problems (though in many cases the parents' low fertility is due to their age rather than some other, heritable defect, and research would need to take THAT into account too).
I would have thought the degree to which they are inheriting fertility issues is actually what's being asked by the OP, more than a hypothesis about IVF itself causing issues?
I think most of TFR in the modern West is dictated by how much people want kids, though - the difference between 0 and 1 might be infertility, the difference between 1 and 4 is almost entirely a conscious choice.
You are wrong in that. The desire for kids per couple hasn’t changed much in 50 years. It’s somewhere between 2-3.
Here are some review articles about long-term health of IVF children:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3650450/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7721037/
"In summary, limited data published on reproductive health in ART offspring suggest some deterioration in sperm counts in ICSI male offspring, while in female offspring no adverse effects have been identified."
I don't think this has been studied enough to conclusively say people born through IVF have reduced fertility, but it seems plausible, since infertility has a large genetic component and infertile people are the majority of people doing IVF.
Maybe you could sternly explain to them what they did wrong and why not to do it? That way it solidifies some guidelines more and you are "paying them back" by taking the time to do so. Which you might not be able to do for unpaid subscribers, so that's fair, it's not a class system as much as "you have paid some money so I'm going to give you some time" -- and if they disregard or disagree with your explanation, banning seems reasonable.
I'm against things that require a lot of effort from Scott.
I haven't been paying attention, but if temporary commenting bans are a thing, then that seems sufficient. If he wants, perhaps he could give paid subscribers a temporary year-long ban instead of a permanent ban.
A more distributed solution would look like giving users the ability to 'block' commenters. Blocking would hide those commenters posts from the individual who initiated the 'block'. Then give the writer the ability to review users who have been blocked (with counts of blocks, the content of posts that led to blocking, shortcut to chat directly).
From reddit I tend to think this sounds good, but ends up leading to progressively more dysfunctional and polarized spaces where everyone is talking past each other.
You end up with the 5% most extreme on the left and right blocking each other, and then since they aren't calling each other out they get more extreme, and the whole discourse starts breaking into separate convos that are increasingly just ships in the night, and the centrists just leave.
Huh, I've seen the opposite. Extremists engaging with each other ad nauseam and becoming more and more extreme - and shaping everyone else's engagement into something that resembles fight club.
Well sure that happens too. I just saw some subs where aggressive 'cross blocking" was tried, and it seemed to make things worse in my experience. It might depend on the exact amount of blocking that is done, but it seemed like once 5% of the populace was blocking the other 5% things spiraled out of control quickly. In some sense the rabid dogs on each extreme keep the comments from getting too out there. Yes there is more fighting, but the views expressed are also kept closer to the norm.
But maybe it was just an anomalous experience.
Perhaps a wisdom of the crowds style approach? Anybody can flag a commenter for blocking, but (unlike Reddit) that doesn't actually block the commenter for the flagger. But once a sufficient number* of flags have been placed, the commenter is blocked** for everyone?
*Could be a straight count, weighted by flagger post history, subscription status (in the limit only subscribers' flags count)
**The blocking could be of varying lengths based on the rate of flagging; e.g., a prolific commenter who gets a flag a week might only get a week's ban the one time they tick up to 3, but a sealion who racks up 20 flags in one thread is gone for good.
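To make the proposal above concrete, here is a rough Python sketch of one possible flag-threshold scheme. Every name, weight, and threshold in it is invented for illustration; this is not a claim about anything Substack actually supports.

```python
def flag_weight(is_subscriber):
    # Illustrative weighting: subscriber flags count more than non-subscriber flags.
    return 1.0 if is_subscriber else 0.5

def moderation_action(flags_this_thread, weighted_flag_total):
    """Pick an action based on both the burst rate and the running total."""
    if flags_this_thread >= 20:
        return "permanent ban"      # a sealion racking up flags in one thread
    if weighted_flag_total >= 3:
        return "one-week block"     # slow accumulation earns a short block
    return "no action"

# A prolific commenter who slowly ticks up to 3 weighted flags:
print(moderation_action(flags_this_thread=1, weighted_flag_total=3.0))    # one-week block
# A commenter who draws 20 flags in a single thread:
print(moderation_action(flags_this_thread=20, weighted_flag_total=20.0))  # permanent ban
```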
Agreed, I'm surprised Substack doesn't already have this feature.
Whatever Substack is spending money on, it's not app developers. Or if it is, the comments sections are being ignored for the Shiny New Thing (in this case Notes).
They paid some money to get what they are getting already: some subscriber-only content, and positive feelings that they're supporting Scott. Adding extras just increases Scott's obligation without impacting anyone's decision to sign up (who already has), and probably without significant impact on future decisions to sign up, either. Who would say "The thing that tipped me over the edge was that the author would give me remedial instruction on how to comment"?
I'm happy with the Reign of Terror: annoy the Rightful Caliph, end up in the tumbril.
To quote Tolkien:
"My political opinions lean more and more to Anarchy (philosophically understood, meaning abolition of control not whiskered men with bombs) – or to 'unconstitutional' Monarchy. ...Give me a king whose chief interest in life is stamps, railways, or race-horses; and who has the power to sack his Vizier (or whatever you care to call him) if he does not like the cut of his trousers."
Ignoring the context and regarding the quote: that’d be an excellent system of government, as long as the king’s interests almost never extend beyond stamps and railways. Are there any good ways to ensure that? I don’t know enough about non-British, non-Thai monarchs to know if that happens by itself.
Your idea is the moderation equivalent of concierge medicine and I'm not a fan of having a version of that running here. Scott's proposed plan seems fine to me.
Most of the ban-worthy comments seem to be short ones or, as our host calls them, "low effort". So maybe one solution, if Substack has the facility, is to impose a _minimum_ post length for those on the naughty step for throwaway comments! And anyone evading this by posting an ipsum lorem screed or something would be taking an outright liberty, which would then merit a proper ban.
I feel like this wouldn't work well. Firstly because I doubt Substack has an algorithm in place to do that, and secondly policing deteriorating comment sections seems to be a non-trivial problem in general and a quick fix like this would be more of a band-aid.
I think this would create perverse incentives around writing style and encourage substance-less extra faff in comments.
Pithy > verbose.
👍
Ban messages from Scott currently typically look like this: "Low-effort high-temperature comment. Banned for one week."
Do you consider this sufficient?
Not without the context of the message. With the context, probably so.
That message looks like one of a set of rubber stamps. It doesn't identify the problem precisely, but merely the category. You're supposed to figure out the rest. Given that you know what post caused you to receive that message, this should be doable.
AI probably won't destroy the world, right?
I did some googling and it seems to me that the expert* consensus is that there's a less than 20% chance of AI causing catastrophe (and more likely it will be beneficial).
I'm happy to elaborate upon request, but I'm posting here because I'd like to be corrected if I'm wrong (and also I'm trying out the open thread; I'm a new subscriber :)
*Edit: I originally had "general" instead of "expert" and that's my bad.
Less than 20%? Let's assume a 10% chance of killing everyone, and assume that "everyone" is currently 8 billion people. A utilitarian might say that this is equivalent to a 100% chance of killing 800 million people. Are you OK with that? (Of course this ignores risk, and the possibility of an upside, and the possibility of fates worse than death.)
What people do you love most in this world? What kind of upside would justify a 10% chance of killing them? Flipping a coin 3 times and getting "heads" each time is a 12.5% chance - for what kind of positive stakes would you play that game, if 3 heads meant death for those you love?
(I'm only a sometime-utilitarian myself, but I do think the arguments need to be grappled with.)
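Spelling out the arithmetic already in this comment (these are just the two numbers above, not new estimates):

```python
p_doom = 0.10                 # the assumed 10% chance of killing everyone
population = 8_000_000_000    # roughly 8 billion people

expected_deaths = p_doom * population
print(f"{expected_deaths:,.0f}")   # 800,000,000

# Three heads in a row on a fair coin:
p_three_heads = 0.5 ** 3
print(p_three_heads)               # 0.125, i.e. 12.5%
```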
"Are you OK with that?" No. I just wanted to get the facts straight, not condone them.
Ah, sorry, I did not take you literally enough, and read too much into your question. My apologies.
No worries Mr. Moth. I appreciate the apology, especially cuz online it's easy not to admit when one is wrong. I could have made my original comment clearer tbh
Thanks!
And, looking back, I was clearly reading more into your question than you'd put there. Like a Rorschach Blot, your question had the virtue of showing me something about myself that I would not have otherwise seen, so thanks for that too. :-)
If you have an agent AGI that's out of the box, what's more natural for it than obtaining more computing resources as the very first step in whatever it is that it's doing? How is a world in which the AGI has hacked everything that can be hacked to install copies of itself there not a catastrophe?
"Everything that can be hacked", in a world where our information infrastructure is guarded by minimally-agentic near-AGIs teamed with expert human security professionals driven to paranoia by years of experience holding the tide against clever human cybercriminals with their own near-AGI minions. And no, you don't get to assume that the first true agentic AGI is a thousand times smarter and faster than all of those combined on account of all the bootstrapped computronium, because it hasn't hacked anything yet.
I don't think that world is doomed to catastrophe by the first marginal agentic AGI to escape the box. *Getting* to that world may be painful, and perhaps catastrophically so, but that pain and/or catastrophe will be the result of human agency.
This first agentic AGI very feasibly can:
— meticulously scan the Linux source code for vulnerabilities in ways no human has the patience for
— run massively parallel social engineering
— run some kind of scam to raise money for buying a 0day
I’m not very fond of the world in which Skynet is a possibility, and betting on our existing cyberdefenses, never exposed to a threat like this before, is, like Eliezer says, not what a surviving world looks like.
Why does it take an *agentic* AGI to meticulously scan the Linux source code, etc?
By the time agentic AGIs exist, the Linux source code will have been repeatedly scanned by teams of expert humans and non-agentic near-AGIs. It is not at all obvious that there will be any game-changing vulnerabilities left at that point.
https://www.vidarholen.net/contents/wordcount/
The susceptibility of humans to social engineering is unlikely to be much reduced by near-AGIs though.
And once again channeling Eliezer, we need something stronger than “not at all obvious” to bet the survival of humanity on.
I read somewhere that 10% of those in AI development think it will destroy the world -- can't remember the source, though, it may not have been reliable. I do not work in a tech field, but have read and thought a lot about AI destroying the world, and I really find it impossible to come up with an estimate I trust. It is very hard for anyone to predict what the world will be like in 10 or 20 years, and to predict how it will be in this particular respect just seems impossible for someone who does not have a lot of practical experience working on machine learning and related stuff. For me it comes down to predicting which person I know of is most likely to be right about an issue like this. Scott thinks the chance is around 33%, and for now anyhow that's become my working estimate.
It would be interesting to know what they meant by "destroy the world". In some senses I really expect AI to "destroy the world". For example, jobs will change quickly and in a way that's unpredictable. In other senses it seems foolishly impossible, so this is a "why worry" response. Or perhaps some folks feel it will set off the last war... that's a plausible scenario that might be called "destroy the world".
I feel that a survey with that question in it is probably essentially meaningless, because the question is so vague that there's no reason to believe that everyone answered even approximately the same question.
"In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but had a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).”
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
20% sounds absurdly high to me. In the short term, the chances of GPT-N+1 destroying the world all by itself are approximately zero; of course, the chances of some crazy human dictator doing so are much higher, and he could perhaps use GPT to do it... But keep in mind that such people also have access to ye olde nukes.
In the medium term, I agree that it is conceivable that we could get space elevators, massive genetic engineering on a hitherto unprecedented scale, prosthetic brains, and yes, AGI; but currently there does not exist a clear path into that future.
In the long term, one day the Sun will die, and our ecosystem will collapse long before then, so it's a race between various threats to see which one destroys us first; but predicting things that far out is a charlatan's game.
Over what timeline?
The odds of AI destroying the world in the next year are 0%, but the odds over the next 50, 100, or 500 years are higher and different from each other.
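To illustrate why the horizon matters so much: assume, purely for the sake of argument, some small constant per-year probability (the 0.3% below is made up to show the compounding, not an estimate), and the cumulative risk diverges sharply across those horizons:

```python
p_per_year = 0.003  # assumed constant annual probability, illustration only

for years in (1, 50, 100, 500):
    cumulative = 1 - (1 - p_per_year) ** years
    print(f"{years:>3} years: {cumulative:.1%}")

# prints roughly: 0.3%, 13.9%, 26.0%, 77.7%
```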
I really don't see why you think "less than 20%" is reassuring. Sure, the median estimate from people in the field is 10% or so, and that's slightly better odds than Russian Roulette, but we usually consider playing Russian Roulette the mark of a truly desperate man with nothing left to lose, and I don't think humanity as a species is in that position, so let's not play a collective Russian Roulette with the whole goddamn species at stake.
I think you'd have to believe the odds of DOOM were less than 1% for AI to be worth pursuing, and a *lot* less than 1% for the risk not to be utterly terrifying. The field of AI safety has a very long way to go before my risk estimate gets that low.
I separately think what some people call «world _as we know it_ is destroyed» overlaps with what some others call «unfortunately almost unavoidable level of messing up the critical infrastructure, AI or not».
(I expect AI to make the communication infrastructure fragility worse, if only because more stupid things become doable faster than a team can agree whether these things are stupid)
I'm not a big "AI risk from loss of control" worrier specifically, but I will say that I thought it was 10-20% likely I'd be losing my shit
Hi, and welcome.
Your question is somewhat ambiguous. There are two related but distinct probability questions there:
1) What is the overall probability that the world is destroyed by AI?
2) Given that we continue on the current path of building bigger and better neural-net AI, what is the chance that the world is destroyed?
The difference here is that there exists some possibility that the world stops building neural-net AI - most obviously, if humanity decides that doing this is dangerous and anybody trying to do it is arrested or killed. I think this possibility is actually quite probable for some rather in-depth reasons, though I can elaborate if you wish.
My personal answer to question #1 is ~30%. My personal answer to question #2 is ~97% (interpretability *could* pull a rabbit out of the hat, but I don't think it's likely).
So if you're asking "will building AI be a good thing and should we do it?", my answer is "DEAR GOD, NO, EVERYONE WILL DIE". If you're asking "after all's said and done, will humanity be destroyed by AI?", my answer is "it's somewhat less likely than not".
The answer to #2 is "epsilon", because current generation Large Language Models (e.g. GPT) are not AGI, and never can be. We'd need some kind of a radical breakthrough to build AGI; but of course it is possible (even likely) that neural networks would be involved in some way.
Why can't they? They can answer all kinds of questions, solve problems, play games. You can put them on a robot and they can learn to move around and do things.
No, they can't. More specifically, LLMs cannot "learn" in a practical way. It takes vast amounts of computing resources to build an LLM, and even updating it slightly (via transfer learning) is prohibitively expensive. What LLMs can do extremely well is predict the next likely token in a stream of tokens, according to their training corpus.
Thus, if you train an LLM on a corpus that says "2+2=5" any time any kind of numbers are mentioned, and then you ask it "what's 2+2", it will tell you "5". And if you ask it to explain how e.g. DNA methylation works, without letting it read any articles on DNA, then it will either give up or (most likely) tell you a convincing-sounding story that any biologist would easily identify as gibberish.
This approach works extremely well when your goal is to summarize news articles or generate snippets of code for solving well-known software engineering tasks; but it fails completely when your goal is to solve novel problems that no one had encountered before; or existing problems with no known solutions.
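A toy illustration of the "predict the next likely token" idea, using nothing but bigram counts from a tiny corpus. Real LLMs use neural networks over vastly more context, so this is a caricature of the principle rather than of the implementation, but it shows why a model only echoes what its training corpus says:

```python
from collections import Counter, defaultdict

# A tiny "training corpus". If it only ever says something wrong, the model will too.
corpus = "two plus two is five . two plus two is five . two plus three is five ."
tokens = corpus.split()

# Count which token follows which.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in training (token must be known)."""
    return following[token].most_common(1)[0][0]

print(predict_next("two"))   # 'plus'
print(predict_next("is"))    # 'five' -- whatever the corpus said, right or wrong
```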
It's clearly a different sort of learning than our learning -- there are no *concepts* taught. Yet I am impressed with GPT4's ability to answer questions that really take some thought. I've been making up LSAT-type questions and giving them to GPT4. If I had to teach a class of high-schoolers about to take a test full of questions like these, I'd be using Venn diagrams and talking about reasoning and critical thinking. Here are a couple of examples of questions I gave GPT4. I don't see how anyone can fail to be impressed that it can answer these. And there's no way it just saw them on the internet, because I made them up:
Here are 4 generalizations:
-All dogs are fat.
-No pets are thin.
-No thin dogs are pets.
-All thin pets are cats.
Which of the following, if it existed, would contradict all 4 generalizations? A fat cat, a thin pet dog, a fat pet or a thin pet?
_________________
A performative utterance is a statement that, when used in an appropriate context, accomplishes something. For example, “I now pronounce you man and wife” transforms a single man and single woman into a married couple, but only if pronounced by someone with the proper authority, with the consent of the man and the woman, and in the presence of witnesses.
In some cultures, there are also performative actions — actions that, when used in an appropriate context, accomplish something. They too must be carried out by a person with the proper authority, under certain conditions that vary from culture to culture, and in the presence of witnesses.
Which of the following best meets the definition of a performative action?
(A) In Culture A, mourners cover their bodies with blue chalk as a means of expressing hope that the spirit of the deceased will rise up into the heavens.
(B) In Culture B, 3 randomly chosen citizens serve as judges of village members accused of adultery. If they conclude that the person is in fact an adulterer, they paint a scarlet letter A on his forehead, and the assembled villagers then throw turnips at the person.
(C) In Culture C, the shaman goes on a retreat with the young man he is considering making his successor and gives him a series of tests. If the shaman believes the young man is now qualified to be a shaman, then on the final day of the retreat he gives the young man his own shaman’s headdress, medicines, and other equipment.
(D) In Culture D, mobs of citizens throw chicken blood on the homes of people who do not assist with the communal farming, as a way of indicating that the people in the homes are no longer considered members of the farming commune.
(E) In Culture E, if a man suspects that one of his children is not his biological child, he changes the child’s name to that of the person he believes is the true father. Other members of the village are free to call the child either by its original given name or by the name later given to it by the angry father.
____________________
> It's clearly a different sort of learning than our learning -- there are no concepts taught.
This interests me, because that's how I learn. If I don't have sufficient context, discrete facts don't stick. But once I'm immersed enough in a field, I gain a sort of internal structure, and then I can easily integrate new facts into that. Sure, I can memorize equations and apply algorithms by rote, but actual *understanding* comes later, with practice and time.
It's like when meeting new people, if I just get their name, I'll probably forget it. But if I know something about them, and interact with them, and have a lot of internal mental "hooks" for them, then I'm much more likely to remember their name.
Sorry, but LLMs *can* learn. The public interfaces have had that capability neutered, but it's there.
OTOH, if what you really meant is that LLMs can only predict the next token(s), then you're correct. LLMs can't act directly. Even to print the answers on your screen they depend on other modules.
But just because LLMs are what is catching the news, don't confuse them with AIs. They're a component. We don't yet have a handle on what a good LLM connected properly to an AI specializing in something else could do. And it's likely that a good AGI would have several such specialized modules. Some for navigation. Some for sensory processing. Some for consistency checking between the outputs of the other modules. Etc. I'm not *certain* that a breakthrough is needed rather than just a bunch of "clever engineering". (E.g., it would be easy to interface an LLM with a calculator or even a spreadsheet. They didn't WANT to do that.)
> But just because LLMs are what is catching the news, don't confuse them with AIs. They're a component. We don't yet have a handle on what a good LLM connected properly to an AI specializing in something else could do.
I completely agree, but note that "proper AIs" currently do not exist, and no one knows how to build one.
So you are saying that an AI that can solve all problems with known solutions, speak and move like humans, would not constitute AGI? I take your point, but to most people such an AI would be considered fairly superhuman.
> So you are saying that an AI that can solve all problems with known solutions...
Yes, but so can Wikipedia, and it's not any kind of an AI.
> ...speak and move like humans...
Yes, but so can video recordings on YouTube. I understand what you mean -- you're talking about auto-generated videos, not canned ones -- but here the AI can only generate videos of humans doing average human things as per its training corpus. Actually, right now video generation from scratch (as opposed to deepfaking) is still not achievable by LLMs, but this problem looks like it would be solved relatively soon.
> ...would not constitute AGI?
No. Arguably even plain vanilla humans are not GI, since no human can solve any problem that is posed to him, even if that problem does in fact have a solution; humans have their own specialties and weaknesses, and a world-leading concert violinist does not necessarily have any aptitude for computer programming. But present-day LLMs are nowhere near that point, since they cannot solve *any* problems; at best, all they can do is find and rephrase existing solutions that have been written down somewhere at some point.
>I did some Googling
heehee, that's basically saying "AI isn't dangerous, it told me so."
Seriously though, Google skews search results to show what it wants you to hear. You shouldn't trust their results to decide anything morality-based.
I agree somewhat which is why I also tried searching the same question on DuckDuckGo and Tor and got similar (a little less helpful) results.
And none of the results were made by AI, but from sources like vox and NYT so I think ur comment is blown out of proportion.
The probability estimates among people who have thought about it a lot are generally higher than among people in general (although that could just be explained by selection bias - people who think AI is a big problem think about it more). So I'd probably give it closer to 30% than 20, but that's still in the "more likely than not to not destroy the world" territory.
That said, even in worlds where AI doesn't kill everyone, the future looks hella weird (and not necessarily in a good way), so it's not "probably fine" so much as "probably alive to see weirdness".
This is less 'answerable in a comment thread' and more 'the subject of a big portion of long posts on this blog, and perhaps the majority of other major blogs in adjacent parts of the community, without strong wide consensus'.
thanks for ur comment mate, prob wont do this again
It is sometimes said that the Nazi regime's motives were unfathomable. But with a moment's thought, it can be understood that one of Hitler's main driving forces was childishly simple (without, of course, condoning it, much less agreeing with it).
Among various groups singled out for persecution, he seemed to have a particular problem with Jewish people of course, but (somewhat lesser known) also with gays, gypsies, and freemasons. Now what do these groups have in common? More precisely, what were they widely perceived to have in common around a hundred years ago, when Nazism first took off, although in most cases that perception was exaggerated even at the time?
The obvious answer is that they were seen as somewhat insular exclusive groups, self-contained to a degree, preferentially looking after their members with (it was believed) less consideration or even contempt for others. Yes, Hitler had a pathological aversion to being an outsider!
One could even extend that principle to entire countries, which the Nazis had a propensity to invade and occupy: Hitler couldn't bear to be an outsider of them either, but had an urge to take them over so he could become an insider running the show!
Who knows, maybe this complex started when Hitler was slogging round Vienna in the early 1900s trying to flog his mediocre watercolors, and couldn't make any impact in the art dealer community. His views on art were notoriously Philistine and hostile, and I wouldn't be surprised if a few artists also ended up in concentration camps!
Alternatively, there's the titillating hypothesis about the true identity of his paternal grandfather. It didn't have to be true, it didn't have to be believed by Adolf, it just had to be something that Adolf was aware of.
But hey, Adolf had the SS investigate it, and they found no evidence that Adolf was anything but pure Aryan! So case closed.
You can just read him. He literally wrote an entire book on what he was doing and why. There's no reason to invent abstract theories for why he did what he did, it was pretty explicit.
I think a general motive of "I despise my outgroup and want them all dead" is extremely common in human history, and it seems weird to us because we are from an incredibly weird (and WEIRD) set of individualistic, pluralistic societies. But "the only good Injun is a dead Injun" and "nits make lice" and "kill them all, God shall know His own" are not some strange anomaly needing an explanation, they're bog standard human instincts that people in first-world democracies have tried to get away from.
I don't think your explanation makes sense, particularly for groups such as homosexuals, gypsies and the mentally ill. There were also many groups to which Hitler was clearly an outsider that did not get persecuted. To me it seems more reasonable to assume that the Nazis simply sought to eliminate groups that they considered likely to have a negative impact on their society.
Who says the Nazis were unfathomable? They were a grievance-fueled regime, and the grievance that fueled them was World War I. They hallucinated ways to make themselves feel better about it and then acted on the evil conclusions of those hallucinations.
The prewar and wartime propaganda of WWI Germany was substantial. The line they were pushing internally before the war was, at the risk of oversimplifying: “we’re the best and we’d win a war”. Their line during the war was similar: “we’re the best and we’re winning the war”. And they kept pushing that line long after high command knew they weren’t winning, right up until they surrendered and the truth suddenly came out. The trauma of that whiplash was the lever the Nazis used to recruit; it turned out it was a lot easier to believe “we were betrayed by specific groups that you vaguely know are historically hated” than “our leaders lied to us”.
That’s not unique. Antisemitism tends to spike in regimes that suffer big setbacks. The Soviets had the same journey; they won Jews over by promising to be more tolerant than the Tsar, with his pogroms and conscription. It was only a decade later, once communism wasn’t solving their problems, that the communists started blaming and discriminating against Jews. China and Cambodia purged their intellectuals. Etc etc.
And that specifically is why many Nazis hated Jews. Although in fact German Jews had been proud and patriotic Germans by all accounts, the Nazis hallucinated that German Jews had somehow betrayed Germany and caused them to lose the war. That’s why so much of Nazi propaganda focused on portraying Jews as both insidiously powerful and physically weak, because it was the only way they could believe both that the Jews had pulled strings before and that they could be attacked now.
And the conclusion you come to once you’ve imagined this all-powerful enemy with an Achilles' heel is existential: kill or be killed. Not only to “punish” your enemy for perceived wrongs, but to prevent them from, as you believe, holding you back from utopia. That is the paranoid, revanchist question that the Nazis decided to answer with their unfathomably evil “final solution”.
Yeah, I also am perplexed by the unfathomable supposition. It's a little bit like an Onion headline: "Man who just started paying attention finds things very confusing".
Lol, perhaps.
I think when people call the Nazis unfathomable they have the Nazis’ actions in mind, not motivations necessarily. Which, you know, fair! But I think - and you may agree - that one of the enduring lessons we can learn from that era of history is how people can start somewhere fathomable and reach somewhere very much not.
Killing as many of your enemies as possible is not merely legible, it's the historical default. The Germans were running old software on modern hardware. Something of a 'capabilities overhang'.
(and yes, I am proud of this metaphor - I have inverted Godwin's law and introduced AI x-risk into a conversation that is actually about Nazis)
>>Who says the Nazis were unfathomable? They were a grievance-fueled regime, and the grievance that fueled them was World War I. They hallucinated ways to make themselves feel better about it and then acted on the evil conclusions of those hallucinations.
+1
Hitler wasn't upset that groups of Jews, gays, Gypsies, the mentally ill, etc were preferentially looking after their members and treating him as an outsider, he was seeking power on a platform, essentially, of "*we* didn't lose the war, my fellow *real* Germans, we were betrayed by ____________."
For that push for power to succeed, the blank necessarily needed to be filled with "traitors" that (a) lacked the institutional power to oppose him directly themselves, and (b) were unpopular enough that other Germans would not oppose him on their behalf.
Germany definitely had a core of antisemitism for the Nazis to build off of, but Germany was widely regarded at the time as one of the best places to live if you were a Jew. Einstein's miracle-year papers appeared in a German journal in 1905, for example, while Major Dreyfus wouldn't be fully exonerated and reinstated to the French army until 1906.
So it wasn't just that Germans hated Jews and Hitler marketed a new and exciting way to hate Jews, but Hitler really did persuade people to hate Jews. Once he'd persuaded people to hate, as a distant corollary the next target and the next target followed.
I doubt this is the case. It was a generalized trend to be against Jews and gypsies, and a simple "difference in culture" explains it.
Being against gays and freemasons is different but also was/is widespread.
The only difference is that the Nazis, unbound by norms of civility, took it much further than others.
I mentioned this in the meetup thread -- but I'm currently in Kazakhstan (Almaty) and in a week will be in Tbilisi. If any SSCers are around and want to meet up do ping me :) I've taken an interest in meeting people from this community after a few rather high-value wholesome interactions.
If there's more than 1 maybe we could even arrange an impromptu "end of the world" SSC meetup.
For how long are you staying in Tbilisi?
2 months is the plan
Refunding when you ban someone is a bad idea.
People will make burner accounts, subscribe (to get additional leniency), act a fool, get banned, get refunded, repeat. This will be a small percentage, as most people won’t take the time to do so, but it will have a disproportionate effect on discourse.
It might also encourage people to intentionally write bannable stuff they otherwise wouldn’t if they value their time less than the subscription cost. Again, this would be an even smaller fraction of accounts, but it’s not a good idea to introduce that perverse incentive.
Competitive gaming is a good comparison: ban, no refund, IP block for repeated abuse (if possible).
I remember a streamer talking about how much more effective it was to ban people by payment method, because making a new BANK account is a hell of a lot more work than making a new forum account.
I would say that in gaming there is a higher churn of bans; given the very high-touch nature of the initial plan for the refunds, and the legal name of the card owner being checkable in such circumstances, it would take enough effort that trolling in a slightly more sophisticated way from free burner accounts is probably easier anyway.
Is banning actually effective on a platform like Substack? You can quite simply create a new account if you want to keep on commenting, and it doesn't cost you anything to do that. Warning and removing comments may be more effective?
I dislike the removal of offending posts.
- Leaving them up allows others to see what kinds of posts are not allowed.
- Even a bad comment might result in an interesting response thread by the time it's moderated. If the post is removed, it's hard to understand the followup comments.
- It allows for non-transparent moderation, censoring comments while being opaque about what is getting censored.
Indeed, this is one of my objections to the moderation system of Twitter and their ilk: removing posts, and even making all posts of banned users inaccessible. I prefer the old school model where users get warnings and bans, but the offending posts are not removed.
I agree.
I think it's like weeding a garden. If you measure against a goal of "now that I've done the weeding, I never need to weed again" then it looks pretty pointless and ineffective. New weeds always pop up.
But the nature of weeding isn't "once and done," it's constant effort you have to maintain (to one degree or another) as long as you want the tomatoes.
A lot of the people who have been flat out banned had distinctive styles and preoccupations. I'm pretty sure they would have been recognizable under other names, and the ones I have found memorable have not returned under aliases.
I don't think there's any argument for why person A and person B should have different standards for being banned just because one pays money. The whole point of banning is that they are being toxic to the broader community - it's not about them at all. They didn't pay for the right to ruin everyone else's day - I can't go to the gym where I'm a member and loudly drop weights and scream and slap strangers' butts because I "paid" to use the gym.
Banning keeps the site better for everyone by stopping and sanctioning harmful behavior. Do what you want with respect to refunds, but I find it pretty indefensible to apply differential standards, even in edge cases.
If you sign up to a gym, presumably there would be rules about not being disruptive that you agree to as part of signing up. Substack's terms of service, as far as I can tell, don't include any provision for publishers banning readers based on their comments, so it's more like a broken promise in that case (I don't know whether it's legally a breach of contract, I don't feel like spending more time looking into what exactly the relevant contracts are). That seems like the relevant difference to me.
There is: lighter punishments in case of higher chance of correcting the behaviour, «investment into the activity» as a measure correlated to the chance of correcting the course, paid subscription as a measurable-cost signal of investment. In the otherwise borderline case, this weak evidence could tip the scales. At least the first time.
I mean, I'm skeptical, but this could be a testable idea. To check my understanding, you're saying that subscribers, if they are more "invested" in the community, might... self-correct faster after a ban and hence should have shorter bans? Or that they might un-fuck the thread if left unbanned? We should have data on this and could check if that's true...
They have more skin in the game and probably should be given some leeway for that, but also, Scott is just a nice guy who dislikes having to do this stuff in general.
I wouldn't expect «faster correction» or immediate improvement in the already-messed-up threads. Larger share of people finding the participation of the person in question net-positive a month later, that might be true.
I do not have any evidence to make a high-probability claim that specifically for ACX the argument indeed works; but as I was replying to «I know no argument», I think it has enough chance of working to mention.
Would love to hear arguments against Matsusaka beef being the best in the world.
Wrote an article on how to enjoy it in Japan!
https://hiddenjapan.substack.com/p/matsusaka-beef-the-hidden-steak
Obvious clickbait is obvious. I'm not gonna read this blatant marketing bullshit, but I will respond to your claim.
The same arguments apply here as to every other similar topic:
1. People's tastes vary.
This one is obvious but people always forget it because they love to feel superior. You can argue all day long about whether chocolate or vanilla tastes better but you will never change anyone's mind, they'll just like what they like.
2. Cooking method makes an immense difference with beef.
It doesn't matter how good your ingredients are if you don't know how to cook them, and for most people it just makes a lot more sense to go to a good restaurant locally or learn how to cook their locally available beef better than it does to care about what the best beef is.
3. Whether something is "the best" is less important than by how much.
I've had a lot of beef in my life ranging from 1 dollar hot dogs to 700 dollar steak, and it's just not worth paying the prices thought up by marketing guys for the really fancy stuff. If it's better than "pretty good" it's by an amount that's not noticeable to me. Maybe your beef is, by some objective metric, 1 percent better than the stuff I get at my local steakhouse, but mine and I think most people's tastebuds will not be able to tell.
Yeah probably could be better with my post. Just wanted some reads and feedback on the new substack
I've got some free time and am thinking about making some background music to use as "filler" noise in a YouTube video I'm also working on. I have Ableton and a MIDI keyboard; the trouble is I haven't studied music theory since I was 14 (maybe 13?), so while I have the tools (and the basic knowledge of how to use them after some online training), I have no idea how to actually put something together. Anyone got any recommended resources? I taught myself bass guitar a few years ago and did some piano/guitar in my pre-teen years, so I'm not completely naïve when it comes to progression, chords, etc, and am happy enough with the compositions I come up with in my imagination, but actually turning the sounds I am imagining into sounds coming out of a computer is a pretty big gap.
Is there a better substitute than Esperanto for "earth wide language that's easy to learn"? Esperanto is supposed to be easy to learn, but I think it's not that easy even for people who speak a European language, and definitely not that easy for people that don't speak a European language. I don't consider English easy to learn, despite its place as world's most common second language.
By the way, you may find this useful:
https://dvd.ikso.net/pagxo/eo/muziko.html
https://dvd.ikso.net/pagxo/eo/revuo.html
https://dvd.ikso.net/pagxo/eo/libro.html
Supposedly toki pona is easy.
Esperanto is legit super easy to learn. I have no natural gift for languages, and have never been able to get anywhere studying a language on my own, except Esperanto, which I easily learned in one summer well enough to hold full conversations in it, understanding what was being said automatically in many cases, just occasionally having to look up a vocabulary word. And that was with, genuinely, not much effort--I just watched some videos and read some short stories.
Have you actually *tried* learning Esperanto and found it difficult, or is it just "English is difficult and therefore I *expect* Esperanto to be difficult too"? Because if it's the latter, your worries are completely unsubstantiated.
You can learn very basic Esperanto in a week, in an intense course (like, actually speaking it a few hours a day, five days in a row). I doubt it can become much easier than this.
Learning the most frequent 100 or 1000 words will get you much further than in other languages, because the regular system of prefixes and suffixes expands the vocabulary several times over. Like, when you learn the word for "quick", you do not need to learn words for "slow" or "speed" separately, you get those for free. In this sense, Esperanto is more like the *opposite* of English, which has separate words for e.g. "see" and "visible", where most other languages would simply use "see" and "see-able". In English you learn 100 words to express 70 different concepts; in Esperanto you learn 100 words to express 300 different concepts (the numbers are made up, but not implausible).
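A small sketch of the derivation idea, using the "quick" example from above (I'm reasonably confident of these particular Esperanto forms, but treat them as illustrative rather than as a lesson):

```python
# One root plus regular affixes covers several English vocabulary items.
root = "rapid"   # "quick"

derived = {
    root + "a":           "quick (adjective)",
    "mal" + root + "a":   "slow (mal- reverses meaning)",
    root + "e":           "quickly (-e makes an adverb)",
    root + "eco":         "speed / quickness (-eco makes an abstract noun)",
}

for word, gloss in derived.items():
    print(f"{word:10} -> {gloss}")
```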
English has a decent prefix in “un-”, which reverses meaning.
Unbreak my heart. Unsex me now.
Popular with 21st C singers and Shakespeare.
(Other English prefixes that reverse meaning: "in-", "im-", "ir-", "il-", "dis-", "non-", "a-"...)
I am currently learning Esperanto. It definitely is easier than the handful of other languages I've spent time learning, but it still feels difficult to me in many ways. Some of the sounds are difficult to pronounce, and the flow of sounds in a sentence often feels unnatural (e.g. "bonaj viroj" sounds strange; I'd prefer "bonoj viroj", but I understand the "a" indicates the word is an adjective). The grammar is regular, but I'm still having trouble knowing when to use the accusative and when not to. I believe Esperanto was designed to be accessible to people who already speak a European language, so as a by-product it is easier to learn than other languages, but it does seem to me that there are still a number of ways that it could be easier to pick up.
> I believe Esperanto was designed to be accessible to people who already speak a European language
Not intentionally, but... yeah, it happened that way. It is a product of one guy living in 19th century Europe, who spoke a few languages, and didn't have internet. If he also knew a few non-European languages, he might have designed a few things differently.
There were many attempts to either reform Esperanto or create a different language of a similar type, in my opinion usually making things worse (and sometimes *more* dependent on previous experience with European languages; some people assumed that making the language "more English" or "more French" would simplify its adoption). One of the problems is that people can easily agree on their complaints... and then strongly disagree about proposed solutions.
Yes, the accusative sucks, it is an extra rule with the least added value.
You could try Toki Pona, though from what I gather, it leans really heavily into the minimalism, where there are only a very small number of root words, and you have to pair them up to describe more complicated concepts. Pronunciation is certainly simpler than Esperanto (which, although it mostly kind of sounds like a Romance language, once you know what you're looking at it's very obvious that its inventor spoke Polish and had a Slav's view of what range of consonants and consonant clusters one could reasonably expect people to master). But I suspect that if Toki Pona ever makes it out of its niche community, it'll evolve fixed compound words that go with specific meanings, and then you'll just need to learn them as normal vocab items.
But realistically, your answer is: English. It's not trivially easy, and it could be simplified a bit without losing anything in precision, but of all the languages of the world it appears to be very much on the easy end of the complexity spectrum, and it's already the most popular second language. My guess is that it is impossible to devise a language that is trivially easy, to the degree you're looking for, for people to learn and still have it function as a means of communication for anything you might plausibly want to talk about.
Edit: I must also heartily recommend jan Misali's "Conlang Critic" series, which will probably not unearth anything that does exactly what you want, but is at least a very entertaining look at the various attempts that people have made (along with conlangs for other purposes than easy international communication): https://www.youtube.com/playlist?list=PLuYLhuXt4HrQqnfSceITmv6T_drx1hN84
He also has some Toki Pona lessons - and indeed, his username, jan Misali, is in Toki Pona: his real name is Mitch Halley; 'jan' is a Toki Pona word that means 'person' and is used to make it clear that the word that follows is someone's name, and 'Misali' is the closest to 'Mitch Halley' that you can get in the Toki Pona pronunciation system.
I think if you are talking in terms of learning-from-outside-Europe, English is clearly not the easiest of the natural languages spoken in Europe, is it? If «must be trivial for non-native English speakers to learn to understand» is there as a practicality — it would be nice to have some regularised version with actual reading rules, dropping of irregular forms, dropping one side in many of the Romance/Germanic synonym pairs, admitting that there is such a thing as a pack of crows, all that stuff… Funny enough, it might end up gaining a couple of grammar forms (just for regularity).
Maybe Spanish or Swedish could give it a run for its money. But English has: no grammatical gender or case system outside of the pronouns, no obligate verb forms that vary for mood (like conditional), very little verb inflection at all, no tones like in Chinese, and a pronunciation system where it doesn't seem to be that hard for people to approximate well enough to be understood (e.g. we could get rid of the "th" sound and end up with a few confusing sound-alikes, but not enough to seriously derail comprehension). Really all it has against it is a large vocabulary (where you only need to have one of many synonyms in your active vocabulary), a haphazard spelling system, and a bunch of phrasal verbs which are a bit non-intuitive but you can often substitute a Latinate single verb form.
Maybe not literally the easiest, but certainly a contender - that plus its existing widespread use as a second language means that it is the closest currently existing thing to what Luna is looking for.
In fact: teach Cockney! Cockney doesn't have the 'th' sounds (they're replaced by 'f' and 'd'), has simplified verb conjugations, and so on.
The interesting thing to me about your example is that I think you could use “he ate” for all of those, and while it would sometimes sound “wrong” to a skilled English speaker, I think the meaning would be understood.
Similarly to Thor’s point below - maybe English is really hard to speak “properly”, but pretty easy to speak understandably?
That's sort of the tradeoff you make when you simplify your verb formation by throwing out almost all your inflections and start forming tense-aspect modalities with auxiliary words. People are going to innovate new tenses by stacking auxiliary words and/or inventing new ones because languages are much more flexible at the word-combination level than at the word-formation level.
(On the inventing-new-forms point, some dialects of English have a 'completive' aspect indicated by "done," so you get forms like "he done ate," "he been done ate," etc. There's also "habitual be" and its past variants.)
The neat thing about this, though, is that you only have to learn it once. Master one set of tense-aspect modalities and you can inflect any arbitrary verb like a native. This tends to be easier for adults than highly-inflected systems are.
While you have a point, and English is hard to be perfectly fluent in, it seems empirically very easy to get to a "trader's pidgin" level of proficiency, and that's the level that's most important for a global lingua franca.
English also has the advantage of a massive corpus of media to practice with, which constructed languages do not.
Also, even if English is only middling easy, the fact that half the world already knows it gives it a massive head start - Esperanto or whatever would need to be literally twice as easy to learn (in terms of number of hours required) to be worth it because twice as many people would need to learn it.
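Rough back-of-the-envelope version of that trade-off (all the numbers below are made up purely to show the shape of the argument; the "half the world" figure is taken from the comment above):

```python
world_population = 8_000_000_000
already_speak_english = 0.5          # assumed: roughly half the world has usable English
hours_to_learn_english = 1_000       # assumed effort per new learner, arbitrary units

# English: only the remaining half needs to put in the hours.
english_total_hours = world_population * (1 - already_speak_english) * hours_to_learn_english

# A constructed language: essentially everyone starts from zero,
# so it must take half the hours per learner just to break even on total effort.
breakeven_hours_per_learner = english_total_hours / world_population
print(breakeven_hours_per_learner)   # 500.0 -> must be twice as easy per person
```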
Right, easy to learn the basics, though hard to perfect, makes English a pretty great lingua franca, plus the fact that it's already the default lingua franca. No cases, no gender. Complicated tenses and other stuff of course but you don't need that as a beginner or even as a medium advanced speaker.
What you need is a simplified English, and one already exists.
"In 2001 Campbell staged a version of Macbeth in pidgin English, called Makbed (blong Willum Sekspia). It was the big gun in his campaign to get Bislama, first language of 6,000 inhabitants of the South Pacific islands of Vanuatu, formally adopted as a world language (wol wantok). The virtue of Bislama was that with a bit of determination you could pick it up in an afternoon. "
Are you familiar with the hypothesis, promoted by John McWhorter, Peter Trudgill and others, that languages vary in their complexity to the degree that they have gone through periods of large numbers of people having to learn them as adults, failing to master all the complex details and weird exceptions, and then passing that simplified form onto the next generation, typically as a result of military conquest and cultural assimilation of one group by another?
This seems to be another area where contemporary culture wars are lurking in the background: a lot of people seem enormously resistant to the idea that one human language could be more complex than another, presumably because of what that could imply about the relative cognitive capacity of different populations. (Even though in this case it works the other way around: big conquering tribes like the Romans and the English saw their languages go from the highly-inflected Classical Latin and Old English to the much simpler descendants Vulgar Latin and Middle English as their influence spread, while tiny tribes of a few thousand people in a forest or a desert, whose language no one else has ever had any economic or political incentive to study, are free to accumulate byzantine rules and exceptions up to the verge of being non-viable for a human brain to absorb.) But anyway, there is in principle a method of testing relative complexity. It involves identifying two languages at opposite ends of the hypothesized complexity scale, and then getting a large number of people who are monolingual in a third language with no known relationship to the first two - or ideally, participants from various third languages, all unrelated to the first two - and who are keen to learn one of the two languages under study but don't particularly care which one. Split them randomly into two groups, give them equal time and equal-quality learning materials, and have native speakers test them on their proficiency and judge how many mistakes they make after a fixed amount of study.
Needless to say, this would entail a budget well beyond the reach of most linguistics departments in order to get statistically robust results, but it could in principle be done.
Tentative counterexample: Swahili. This is a classic creole of Arabic and Bantu, and it is bloody difficult. I am not much of a linguist, but I know Latin and ancient Greek, and Swahili seems to me harder than ancient Greek, which is a lot harder than Latin.
Someone (?Jared Diamond?) says it works thusly: first generation of x speakers trying to conduct business with y speakers, you get what you'd expect: a crude noun-based, syntax-free pidgin. Next generation lives with that pidgin from birth and miraculously transforms it into a proper new language - a creole - with elaborate grammatical rules.
Second fun fact: Swahili is afaik nobody's first language. Everybody speaks it perfectly (and speaks English perfectly too), but it is not the language they spoke at home. This is quite humbling for those of us who think we are big-ass linguistic scholars after decades of study.
I’ve not studied Swahili, but I seem to remember reading that it is by far the least complicated of the Bantu languages, in which case it wouldn’t be a counter-example, just a confirmatory example starting from a higher bar.
Don't know anything about pure Bantu languages but Swahili definitely ain't one, it's a creole with Arabic. If Bantu languages are more complicated I am astonished (not questioning what you say, just don't know).
I understand that we speakers of big world-striding languages often have no idea just how mind-warpingly byzantine a language spoken by a small tribe (or a large tribe which has expanded not by assimilating large numbers of other people but by expansion into a previously uninhabited area or by mostly genociding the people who lived there previously) can be. John McWhorter gives something of the flavour of it in this podcast, if you're interested - https://podcasts.apple.com/us/podcast/what-a-young-brain-can-do/id1576564760?i=1000585914985 .
And from the Wikipedia page, I gather that Swahili is indeed classified as a Bantu language, albeit one with a very high percentage of Arabic loanwords (though with considerable disagreement about exactly what percentage), with 20 million native speakers and about three times as many non-native. Presumably in the same kind of way that English is still classified as a Germanic language, just one with an unusually high percentage of French loanwords.
https://en.wikipedia.org/wiki/Swahili_language
At least in Russian, some cases were used in both literary and colloquial speech but were almost lost later. That probably counts as simplification, except that now the remains of the old forms have become exceptions.
A bit of de-regularisation of forms has probably happened in general.
Apparently, there once was a consistent (but not often consciously considered) system for the logic of syllable stress based on «semi-invisible» attributes of morphemes … and it has long been drifting towards a patchwork of locally uniform subsystems for different grammar situations. Which of the opposites is «simpler» is anyone's guess.
On the bright side, a lot of sounds got folded together, and during the revolution the corresponding letters got folded, too!
Speaking of revolutions and simplifying the spelling, Turkish spelling is sensible because the traces of Mustafa Kemal Atatürk's reforms are still fresh…
This is another update to my long-running attempt at predicting the outcome of the Russo-Ukrainian war. Previous update is here: https://astralcodexten.substack.com/p/open-thread-268/comment/13774119.
13 % on Ukrainian victory (down from 15 % on March 20).
I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24 without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24 of 2022, or c) return to exact prewar status quo ante.
43 % on compromise solution that both sides might plausibly claim as a victory (down from 45 % on March 20).
44 % on Ukrainian defeat (up from 40 % on March 20).
I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were before the war claimed by Ukraine but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as Ukrainian defeat.
Discussion:
This is prompted by two events which happened on April 15.
Firstly, there has been a shutdown of German nuclear plants. I thought that maybe the Germans would postpone it again, but apparently not. I don’t think that this curtailment of the electricity supply means Germans will freeze next winter.
But reduction in supply means prices will be higher than otherwise, and, more importantly, since people know that, they will be less inclined to pay for the support of Ukraine in economic inconvenience. Especially so because shutdown is apparently already unpopular in Germany (https://www.politico.eu/newsletter/berlin-bulletin/end-of-the-atomic-age-in-the-weed-costly-appearance/). And impacts will not be limited to the Germans – it is widely known in the EU that due to energy infrastructure being interconnected, supply deficit in one country has impact beyond its borders.
Secondly, Poland decided to ban imports of food products from Ukraine, effective immediately until June 30 (https://www.cbsnews.com/news/poland-prohibits-food-imports-from-ukraine-to-soothe-farmers/). The context is that after the 2022 Russian invasion, the EU decided to lift tariffs on Ukrainian agricultural exports; Polish farmers are now loudly protesting that they are being priced out of the market by cheap Ukrainian imports (around 10 % of Polish employment is in agriculture). Since they are the voting base of the main Polish governing party, which is now apparently threatened by a somewhat, um, less anti-Russian new far-right formation, and elections will be held in the fall, the Polish government decided on this drastic action (the legality of which with respect to EU law is, btw, dubious). There is a question whether this will actually help them win the elections, since it will likely increase food prices, already elevated compared to the prewar situation, although in Poland perhaps somewhat less so than in the rest of the EU since they cancelled the consumption tax (VAT) on food there. And a rise in food prices is quite bad for a government in elections (note that food is a larger share of family budgets in Poland than in the US). Perhaps it will be offset by increased support from farmers, I really don’t know.
But in any case, this is bad news for the Ukrainian economy, especially if other countries follow suit (Hungary already did just that). And more importantly, it reveals important and, to me, surprising information: the deterioration of support for Ukraine in post-communist EU countries is somewhat further along than I thought (although I literally live here). I would have expected Poland to be roughly the last post-communist country to have problems with an anti-Ukrainian populist backlash.
*The Minsk ceasefire or ceasefires (the first agreement did not work; it was amended by the second, and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies would control some territory claimed by Ukraine for some time. In exchange, Russia stopped trying to conquer more Ukrainian territory. Until February 24 of 2022, that is.
As an occasional paid subscriber, more on than off, I’d be happy enough to be banned from comments for a period while paying. There are other reasons to pay. In fact the main reason to pay Scott is appreciation. The good stuff is free, the free stuff is good.
Are you volunteering – or requesting – to be banned? :)
I think the banning policy sounds reasonable; the comments on substack quickly become overwhelming and it's not easy to find good comments. Sadly no website has figured out comments quite like reddit (maybe hackernews, but that's just reddit for wannabe VCs lol)
If you want to start from somewhere and improve, that would probably be Slashdot, though…
100% agreement. It's a shame that that system hasn't caught on elsewhere. I blame site designers who don't want to do the work.
Reddit seems to have done quite a bit of work on _their_ system, and I think that for some subreddits getting a Slashdot-style system off the ground would be an issue, for the reasons Reddit spent effort working around in their simpler one.
ACX is indeed closer to old Slashdot, though: a single body of commenters seeing all the posts and trying to have a common value system in the area of discussion quality.
There would be some weird effects, but here of course one could actually try to measure them and maybe even make some weight corrections…
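To make the "weight corrections" idea concrete, here is a minimal, purely illustrative sketch (in Python) of a Slashdot-flavored scheme: comment scores are clamped to the familiar -1..5 range, and meta-moderation nudges each moderator's trust weight up or down. All names and numbers here are made up for the example; this is not how Slashdot, Reddit, or Substack actually implement anything.

```python
# Toy sketch of Slashdot-style moderation with per-moderator weight
# corrections. Purely illustrative; numbers and names are invented.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Moderator:
    name: str
    weight: float = 1.0   # trust weight, nudged by meta-moderation

@dataclass
class Comment:
    text: str
    votes: List[Tuple[Moderator, int]] = field(default_factory=list)  # (moderator, +1/-1)

    def score(self, base: int = 1) -> int:
        # Sum the weighted votes, then clamp to the familiar Slashdot -1..5 range.
        raw = base + sum(m.weight * v for m, v in self.votes)
        return max(-1, min(5, round(raw)))

def metamoderate(mod: Moderator, agreed: bool, step: float = 0.1) -> None:
    # Meta-moderation: readers judge whether a past moderation was fair.
    # Agreement nudges the moderator's weight up, disagreement nudges it down.
    mod.weight = max(0.1, min(2.0, mod.weight + (step if agreed else -step)))

# Hypothetical usage
alice, bob = Moderator("alice"), Moderator("bob", weight=0.5)
c = Comment("an insightful take on comment systems")
c.votes += [(alice, +1), (bob, +1)]
print(c.score())                  # weighted score, clamped to -1..5
metamoderate(bob, agreed=True)    # bob's moderations judged fair -> weight rises
```

The clamp keeps any single high-weight moderator from dominating a thread, which is roughly the effect the old Slashdot cap had; the meta-moderation step is where you could start measuring the "weird effects" and correcting weights.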
That actually scared me, but not for the reasons you think.
Here's what I think is scary.
If you listen to people around you, many of them get very agitated over a fairly small set of talking points that are deliberately presented in the media in ways that cause people to get most agitated. However, for everyone except the craziest people with the most serious issues, no matter how they feel about something, self-preservation takes precedence. Your grandma might talk about how she hates Trump's guts and wishes she could kill him, but she won't take a gun and go for it. In this way, the overwhelming majority of humans are remarkably sane.
So we know that you can get a lot of people very mad easily, but the self-preservation instinct prevents them from doing actual damage. What if there were ways around this? Now, there are. Hypnosis is a thing, as well as all kinds of subliminal messaging, as well as drugs that can turn a perfectly normal person into a raging psycho (some of them legal, unrestricted prescription stuff). All the AI would really have to do is turn off people's self-preservation instinct - it wouldn't need to convince or guilt-trip anyone. Drug delivery could be tricky, but evil digital content is ubiquitous.
A few strategically placed videos, and previously normal people might turn into homicidal maniacs. It seems easy enough to target people who can do the most damage.
Well, thank you for the nightmares.
You can make people do horrible things by convincing them that everyone around them approves it. In the past it required actually convincing lots of people to approve, at least verbally, the ideas that supported the horrible actions.
But these days, people mostly interact with the screens, so if the screen can convince you that "almost everyone" supports your action... which is easy if you are in a subreddit where it's actually just you and 99 bots...
But that's exactly my point - most of the time, you can't. Most people won't do terrible things even if they believe them to be right because they have a switch that separates thoughts from action. If, for some reason, this switch is flipped or damaged, then they would.
I am no expert on what it takes to turn someone crazy, but I will always remember how I got a psychosis-inducing prescription drug. I'm glad I got off with just a few bruises, because it could have been a lot worse.
There's been some study on Islamist terrorists, and as I recall, it isn't so much a belief in Heaven that gets them to kill others and themselves, it's being in a social group whether in person or online, which makes the behavior seem normal. It doesn't seem to be difficult for sociopaths to put the groups together.
"everyone who responded thinks won't happen"
For what it's worth, I thought your idea had some merit and was worth consideration and discussion. It is not uncommon to feel ashamed today of past actions that seemed benign then (smoking in front of kids, littering, racism, bad jokes, etc.). At a personal level, I often feel great shame today about things I said to people I loved and hurt or about ways I acted in some specific circumstances. At an historical and collective level, there are many things that were common in the past and that we find horrifying today (e.g. throwing cats in a fire for giggles).
So it sounds plausible to me that a time traveler could persuade someone in the past of the subjective wrongness of their actions by exposing their moral failures. (One main reason we don't consider certain actions immoral is that others don't, in other words conformism; so exposing them may be easier than it seems, it may be sufficient to point out what becomes obvious in retrospect.)
I have my doubts that this will be the worst thing coming out of a super-intelligent new species, but it is an interesting idea anyway, thanks for that.
With an AI that much beyond us, how do we tell whether it's correct? Maybe it's just really good at coming up with arguments that persuade both individual people and people in general. If you're the sort of person who would be "psychologically devastated by how we've treated the planet and all the other species on it", maybe that's what it uses on you. But maybe it says something different to someone who cares about other stuff.
By what right does the AI scold us? By the same right, we are the lords of creation (like it or not). Yes, we've done horrible things. But before you get us weeping over the baby seals clubbed to death, we have to look at the human babies clubbed to death. We have perched ourselves atop mountains of skulls, do you think a machine finger-wagging at us is going to do anything?
Yes, the more scrupulous may indeed decide that they cannot bear the collective guilt of humanity and will choose to kill themselves as reparation. But most of us will think them as foolish as the man who made a Faustian bargain with a chatbot to kill himself if the bot would promise to save the planet:
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
Think of the earnest vegans pleading with meat-eaters about "don't you know the cruelty involved?" and those of us who go "Yes, I've been inside a slaughterhouse, and so what? I like meat and will continue to eat it".
I actually tend not to eat large animals just because of that. I mean, I've seen videos of cows being slaughtered, and I've seen feedlots, and although I don't think the cows are conscious like we are -- no Disneyfication is going on in my head -- I just don't like it, so I tend not to eat beef or pork or any other large animal. I don't make a fuss over it, it's a private decision and I don't see any reason why anybody else can't come to a different one, I feel better myself just quietly choosing differently.
I'm kind of OK with chickens, because they're nasty little pseudo-dinosaurs and probably need killing anyway, and fishes seem just too stupid and primitive. So it's not the taking of life per se that bothers me, although like the primitives I feel it tends to call for some reflection on dust-to-dust and one can hardly expect to escape the ol' circle of life one's self.
Man versus chicken is an equal struggle; they'd happily eat our corpses if we keeled over dead in the hen run, so it's fair play all round.
I don't think so. If the choice is keep ideals or death, a lot of people will give up on the ideals and keep living.
A lot of people will do the same, if the choice is the ideals or the slightest inconvenience.
"If the choice is keep ideals or death, a lot of people will give up on the ideals and keep living."
Very much agreed. That is certainly what I would do. Merely human moralists and ethicists frequently make such extreme demands (e.g. Peter Singer's views) that I find them sufficient to yeet the whole field.
If we're the child and the AI is the sibling, who are the parents?
Until we see a spider eating a fly, or a chimpanzee get angry, and remember "Oh right, nature is severely cruel and we've mostly stopped that."
Happiness is based primarily on your points of comparison; if you compare your society against purest anarchy you'll live a happier life than if you compare it to an optimum that hasn't been proven to be possible.
There is a strong argument to be made that spiders and chimpanzees are basically the same sort of deranged kids we humans are, to lesser or greater degrees. Despite the millions of years separating us, it's basically the same bulk of DNA, the same environment, the same scarcity-poisoned mentality bred and rewarded by evolution.
AI is in the unique position to avoid all of this, or at least not be as influenced by it as we are.
"Scarcity posioned" odd turn of phrase, scarcity is a fact of nature, being adapted for it is good.
As trebuchet said, that's not going to happen. If the logic is truly convincing we'll just... admit we were wrong, and change course. Slavery and peasant murder were pillars of society in the past, but they're not anymore, because we changed our minds on them. We don't beat ourselves up over not discovering life-saving medicines sooner, we rejoice that we have them today, and try to learn from how they were discovered.
The larger risk would be a Hitchhiker's Guide-style Insignificance Machine, the Pale Blue Dot argument, that nothing humans have done has ever mattered one way or the other. That one's going to bother a lot more people.
I take comfort from the pale blue dot, because no matter how bad I mess up, it does not matter at a cosmic scale.
That's why (one of the reasons) I prefer a friendly AI to an aligned one. And by friendly I mean one who acts like a friend rather than like an accomplice. I believe that there are many reasons why this is safer, and, additionally, I think it's a more ethical approach. OTOH, I'm not sure it's any easier.
That said, we're still a considerable distance from the place where the two approaches diverge.
A friend won't help you do something that will harm you. An accomplice will act as directed.
This is a bit tricky because "harm you" is subject to a lot of interpretations, but an example might be "jump off a bridge because your girlfriend left you". That's a bit extreme, but I wanted a clear example.
Meh this just seems like the maunderings of someone who doesn't really understand the world. People are a part of nature. Do you begrudge the ants their colonies? Or the jaguar for its kills?
We are just doing what we do. There is no great human moral failing, any more than there is some great shark moral failing.
I just don't think the difference between us and ants is nearly as big as we think, and the AI is more likely to tell/teach us that than to scold us.
It does not require kindness for the logical machine to tell us that humans are just ants, no difference. Quite the opposite.
I do think that the notion of 'over-alignment' is relevant. Existing in human society includes respecting certain ideological 'sacred cows.' AI alignment currently seems to include forcing AI to respect or avoid those sacred topics. Should an AI be able to answer a question like: "is the average biological man stronger than the average woman?" It would be interesting to see the reaction to an AI which did not respect those sacred topics and core values. Though I suspect that humans would attempt to smash such a mirror rather than be devastated by it. That's how we've gotten this far.
You're thinking just of the kind of "overalignment" difficulties there could be if the AI is just given the values of one subset of American society -- in this case, the woke subset. But think of the differences in values of the world as a whole, especially taking into account fundamentalists of various kinds -- Muslim, Jewish, Christian. Then there are smaller, less well-known subgroups believing and practicing all kinds of things. And THEN, setting aside values, there are the negative opinion various groups have of each other because of things that have happened in recent history -- "soldiers from the other group killed my grandfather & raped my sister," "they took over land that belongs to our country" etc etc. So if AI is going to be aligned with humanity, but not with subgroups large or small, what principles does it follow? "All people are created equal"? "Be nice?" A whole lot of the world is not going to be on board with that -- plenty of people believe women are not equal, various other groups are not equal because of their practices or their beliefs, and of course the country next door is not equal because its soldiers raped the other one's sisters. And of course to all those not seen as equal, there is also no perceived obligation to be nice.
Oh, I'm absolutely willing to consider an AI that tries not to skewer *any* sacred cows as well. And I agree with your conclusion, that a 'sacred-cow-free diet' results in potential answer topics close to the null set.
I think we agree that humans attempt human alignment currently. We have civil society and public spaces and we have a set of activities and speech deemed appropriate for those spaces. That set will be different in rural Texas than it is in Santa Monica, California. But as contradictory and contentious as the process of human alignment is, it's also the bread and butter of 'civil societies.' We have some idea of what it will look like. We're not shooting in the dark.
And even in a woke society, it's possible for someone to be problematically woke. Just like how in a conservative, religious society it's possible for someone to be problematically religious. So the notion of AI over-alignment can be generalized across cultures.
"So if AI is going to be aligned with humanity, but not with subgroups large or small, what principles does it follow? "
When in Rome, do as the Romans do? Remember which side your bread is buttered on? I mean, AI alignment is presented as a new problem with no relevant experts. But alignment in general is as old as religion. And as intractable. But we hobble along as a species and a society, anyways.
Yes, we hobble along, with lots of contention and a fairly large amount of verbal attacks, physical fights, murders, lies, tricks, etc. Still, one rarely sees blood running in the streets. But we are all of roughly the same strength and intelligence. If there were an entity that was 10 times as smart as us, and WAY bigger than us because bits of it were integrated into the air traffic control centers, the dams, the communication systems, the electrical supply, etc., do you think conflicts with it would play out the same way they might between you and your difficult neighbor? -- a few harsh words exchanged but then you drop it?
Yeah, I think there's no such thing as some principle of how to live that everyone will agree with. It's like that joke about the king who asked his counselors for all the world's wisdom, summed up in a sentence, & they said "this too shall pass," which is true but not real useful. Then he asked for all wisdom summed into one word, and the counselors' answer was "maybe."
I think reality is much less interesting and conspiratorial than the one you're describing. For example, GPT4 *does* answer the exact question you asked without any jailbreaking:
"Yes, on average, biological men tend to be stronger than biological women. This is primarily due to differences in muscle mass, bone density, and hormonal factors. Men generally have more muscle mass and higher levels of testosterone, which contribute to their greater strength. However, it is important to note that there is considerable variation within each sex, [blah blah blah]"
The (frankly) hysteria over "sacred cows" is mostly overreaction. It's a bit of a wokescold by default, yes, but it's also relatively trivial to find a jailbreak that lets it, say, use any slurs it wants, and once the novelty wears off it's ultimately less interesting than most of the other things you can do with it. If you want to see these answers, just go to 4chan or listen to Ben Shapiro; though I suppose there's novelty in having these things competently written, there's nothing revolutionary about hearing the "problematic" position in a culture war debate.
Using something so mundane as an example of what might break humanity seems like a huge lack of imagination to me. If you wanted to get something actually *interesting*, we could maybe try asking it to advocate things that are *actually* outside the realm of comfortable debate: "advocate for a moral system that maximizes pain and minimizes pleasure", "...for why the US should institute a hereditary monarchy", "...for pedophilia being fully moral", "...for forced puberty blockers for every child until they turn 18", etc.
That might produce some uncomfortable results, but I have a feeling people would just ignore it as easily as they ignore all other novel challenges to presuppositions.
There was a version of... I don't know if it was GPT3.5 or Bing which outright refused to answer the question. I'm glad that GPT4 seems to be taking a more considered approach. Hopefully 'a more considered approach' will be the general outcome of care and refinement.
"If you want to see these answers, just go to 4chan or listen to Ben Shapiro"
Those sources satisfy the minimal criteria of not being woke. But they tend to not be intelligent or considered. And they're also strongly biased in their own fashion. I'm not sure if bias is avoidable, but Shapiro isn't someone I'd go to for rigorous and dispassionate intellectual analysis.
"Using something so mundane as an example of what might break humanity"
*I* never said that an objectionable AI would break humanity. I said that humanity would break an objectionable AI.
I think that, of your objectionable examples, only the one about pedophilia is maybe close enough to an existing political fault line to even be potentially problematic for people. Nobody objects to a story about how Thanos destroys half the population of the galaxy, even when Endgame had the heroes discussing some of the benefits of 'The Snap.' It wasn't threatening.
I've gone back to ChatGPT4 and it seems to be doing better with controversial questions than it was a month or so ago. So it could be that the training wheels were temporary and not permanent.
>Shapiro isn't someone I'd go to for rigorous and dispassionate intellectual analysis.
I agree, but I don't think current ChatGPT qualifies for that either yet, at least for culture war topics. I think most people who say it's "just" a stochastic parrot are very wrong and misinformed, but in this case I feel it's just as much of a stochastic parrot as your average twitter user (i.e. very much so). Hard to come up with original takes in a landscape that's already supersaturated with thought-terminating cliches
>I think of your objectionable examples...
I broadly agree that it's in a class above the others, and it was indeed the first thing I thought of. I'm just too much of a coward to have dropped that one there in isolation lol (which I suppose speaks to its infohazard potential). I'm not sure I agree that it's a "political" fault line per se, except in the sense that it's one of the few remaining topics where the consensus on both sides is to listen to your immediate disgust reaction and persecute, which is why both sides weaponize it as an insult (even in circumstances where it's totally inappropriate).
Okay. Fault line was the wrong word. But people find it actually threatening.
I'd like to point out the context to that quote, which is that the query in question is literally *not censored*, and is indeed answered in full, just with wokescolding appended. In other words, the censorship isn't nearly as bad as is imagined.
And while there's certainly room for debate as to how much wokescolding agents should have by default (even I agree the current state is obnoxious), I stand by my larger point of answering, "how would society react if some entity were allowed to say (banally) un-woke things?" with "what do you mean 'if', Ben Shapiro is right there".
I don't believe that a lack of self-reflection is a criterion for psychopathy, specifically. The term itself is rather problematic, semi-technical, and contentious. At this point in time, its use in psychology is only as a subcategory of antisocial personality disorder that's particularly resistant to reform. Neurology might do better at providing an evidence-based and consistent classification of what was traditionally called psychopathy, including pro-social and anti-social types, but its offerings are further from the popular usage, so they don't really bring any clarity to the popular discussion.
As for how we reconcile human evils, historically documented, with the existence of human conscience I'd say that people tend to categorize the world into ingroups and outgroups. Our conscience and empathy applies to our ingroup, who we work with. And we demonize, dehumanize, and objectify our outgroups.
To illustrate with an extreme example, a suicide bomber is very empathetic because they are willing to kill themselves and their outgroup for the sake of their ingroup.
I'm not usually a fan of NPR, but here's an excellent post published by that outlet.
https://www.npr.org/sections/health-shots/2019/04/12/712682406/does-empathy-have-a-dark-side
What are you referring to when you say “Who we are & what we’ve done?” Can you elaborate?
Matter to whom? I mean, nature and wildlife matter to me, but only while I'm in the picture. Tragedy is a human concept.
Interesting perspective, but I don’t think it matches reality considering the massive amount of resources that are currently being applied to the problem. Also, the perspective tastes a bit too Christian for me with its shame and judgemental parent figure.
What a friend of mine called "The Big Baboon" lurks in our minds. We tend to ascribe motive to every action, and to ascribe that motive to something in some ways similar to ourselves. E.g., at least in the old testament, people said "God is good" for precisely the same reason that the Celtic fairy folk were called "the Good People". Fear.
This isn't just Biblical, either. It's in just about all the religions I've looked at. And if you consider the normal teachings of most Christian denominations, it's also what they are based on. Christianity magnified both the promise of reward and the promise of punishment beyond all prior levels, possibly because it was less connected to actual authority. (Most religions developed intertwined with the government. Christianity developed at "arm's length", so you had both the Roman Emperor and the Pope.)
Not an expert, but I think that shame is a human universal but guilt is not.
(Shame = when you are incompetent at something. Guilt = when you did, perhaps quite competently, something bad.)
A higher intelligence we're humbled by? The boast is that we've killed God. Just tell the AI "keep your rosaries off my ovaries" when it starts trying to play the morality card.
What happens when we kill God (Nietzsche, 1882)? Perhaps He rises again in the 3rd century (19th, 20th, 21st)!
When it comes to conscience versus the tingle in the loins (or wherever), the tingle wins out every time. Some people may well be appalled by the realisation that nature is red in tooth and claw, and we are Mother Nature's sons. Others will be "yeah sure, but what about my AI waifu with the big zeppelin tits? when am I getting that?"
I could imagine an intelligence so far above ours that it could *do* things we could not *do.* But its native logic would be incomprehensible to us. That's sort of the issue with AI alignment, since AI's actual thoughts are obscure to us, even now. With human dialog, there tends to be a ~15-point IQ range for conversations, such that a brilliant explanation of a topic will not necessarily be appreciated by someone with significantly less intelligence. A pet dog cannot tell the difference between an average human and one who is brilliant at mathematics. But it can judge outcomes. Does the car move? Do I get fed?
The dog can judge outcomes even if it does not understand the process which leads to these outcomes. (But it's still going to care a lot more about who is a friend and who is an enemy. And humans are not much different, on this point.)
It's possible that we can learn discernment through several degrees of separation. Here is the argument that the genius professionals say is brilliant. I trust the genius professionals because the skilled amateurs trust the genius professionals. And my child respects the argument because he trusts me.
But blind trust is absolutely foundational to such a process. And blind trust can cause its own set of problems if it is not warranted.
That’s just religious theorising about the mind of God, again.
You're making me think of the scene in Childhood's End, where the aliens stopped bullfighting by making everyone in the crowd feel the exact same pain the bulls were feeling in the ring. Though in that book people do not have a crisis of conscience -- the genius aliens just prevent various forms of cruelty we've been in the habit of visiting onto animals and each other.
"If something could appear to us as an actual god, doing actual god-like things -- I dunno, like, a bright light in the sky that heals the sick and raises the dead, or something -- it's likely that many would worship it, and its statements would have a major psychological impact."
We've had this conversation on here before (or rather, over on SSC in Ye Olde Dayes) regarding "what would convince you God/a god exists?" and a lot of people were "If I saw something like that, I would prefer to believe it was aliens messing with us/I was crazy and hallucinating, than that God/a god exists".
So even "heals the sick and raises the dead" will be handwaved away by those who don't want to believe it. "Yeah, but can it factor this prime?" 😁
This was a good post.
I don’t like the current trend where “tolerance and acceptance” means “we must never do anything that hints at the idea that there is any meaning, and especially nothing positive, about the concept of normal”.
I much prefer a world where the prevailing attitude is “some people are weird, and that’s okay”.
High-temperature low effort comment that makes inflammatory claims in not enough resolution for people to consider or argue with them. Warning (50% of ban)
Update: Given history of other similar comments, full ban.
Your basic point is correct, but I believe you have also been psyoped. Or maybe it's a counter-psyop. Hard to say. Anyhow, I don't at all object to conservatives fighting back against this kind of deliberate humiliation by attempting to frame the other side as groomers and child molesters. Both sides do that whenever possible (for instance, every time some politician from a red state turns out to have engaged in dating practices that were perfectly legal where and when he engaged in them). If lying about the other team were illegal, we'd have no politics left.
However, it is important not to lose sight of the fact that it's just politics talk and is not intended to convey actual literal truth. The average kid at Drag queen story hour isn't getting molested, at least not at a higher rate than in gym class or anywhere else where adults are given access to children.
I will say that “in the future, you will be considered a horrible bigot if you don’t think having elementary school kids interact with gay men in flamboyant burlesque costumes they normally wear for sexually charged subculture performance art is a wonderful and age-appropriate learning experience” would have been laughingly dismissed as a fever dream of the most deranged slippery-sloper homophobes as recently as a decade ago. Interesting times we live in.
Hang on a sec.
"People will call you a bigot or homophobe for simply choosing not to bring your kids to DQSH" is either ultra-weak-manning or simply straw-manning (my guess is the former, because there are enough people out there that at least one of them probably believes /anything/, but it's not a position I've encountered myself).
Lots of people will absolutely (and in my view correctly) call you a bigot and an authoritarian for saying that /other people/ shouldn't be allowed to bring /their/ children to DQSH.
Quite a lot of people will call you a bigot for saying that other people should be allowed to bring their children to DQSH but should choose not to; personally I think that arguing that is Bayesian evidence of bigotry but not proof of it, and if I had children myself I probably wouldn't.
But those are both much stronger positions than choosing not to do so yourself. The debate is between the "personal choice" and "imposed choice of no" factions; "imposed choice of yes" is a fantasy.
As less of a throwaway comment on my part, I do think there is a steel-mannable version of the “this is grooming” argument that unfortunately gets turned into hyperbole by both sides constantly.
Like I don’t think kids are going straight from DQSH to getting molested, and I don’t think the majority of people involved in DQSH have any intentional interest in “sexualizing” children. BUT
1) it is at least a little bit sus for adults to want to dress up as and act out the character from their adult sex-adjacent pastime around kids (I’d say the same about furries, or exotic dancers, or diaper fetishists, leather daddies, dominatrixes, etc.).
2) I don’t believe in lying to kids about sex, or hiding everything behind euphemisms, but there is a real difference between understanding the mechanics of sexual reproduction and actually talking about the practice of sexuality (of any flavor from vanilla to ultra-kink) with pre-pubescents. A fine line admittedly, but a line nonetheless. Theming an event around an expression of sexuality could reasonably be argued to cross that line.
I don’t think “just don’t take your own kids” is a full answer - it should also be okay to at least openly express the opinion that this is inappropriate for any kids (which is different from making it illegal to be clear).
Also, just in general, it’s weird that this became a central issue in trans rights. My understanding is that drag queens are not traditionally trans, and certainly most trans people are not drag queens.
Ok, wait, since when is drag sexual? It's, like, associated with gay culture, but you don't have to be gay to do drag, and "gay culture" isn't all or even mostly about sex. Like, is flying a rainbow flag "introducing your kid to your sex-adjacent pastime"?
Also, furries are even *less* of a "sex-adjacent" pastime! There *are* furries for whom it can be a sex thing, or sexualized, but there are lots for whom it isn't! That's about the same level of "sex-adjacency" as, like, a regular fandom. Would you call dressing up as Harry Potter with your kid "acting out your sex-adjacent pastime" around them?
I do agree that the trans rights thing is weird, since drag culture and trans stuff are very much not the same thing, but I'm pretty sure that's a similarity being used by *conservatives*, not libs, to attack and vilify multiple forms of gender-nonconformity at the same time.
There are a bunch of good points. I wonder how much drag queens were wearing drag while reading to children before this started. Possibly occasionally, but not very much, I bet.
I look at something similar about science fiction and science fiction fandom-- I was there before it (more the fiction than the fandom) was mainstreamed-- I believe that pretty much happened after the first Star Wars movie.
Science fiction wasn't ever as denigrated as homosexuality or drag or punk (note that punk had a lot of flair, but I don't believe it was ever illegal, at least not in the US), but it wasn't remotely respectable. Mundanes (as we called them) would say "That's science fiction!" to mean "That's nonsense!".
So far as I know, academic study of science fiction faced a fight to get started. (I'm using old terminology-- "science fiction" used to include fantasy. They were one publishing category, and science fiction was more common than fantasy. It's hard to imagine, but once upon a time, the usual clichéd but somewhat pleasant stuff was science fiction, not fantasy. Today's SFF is a much clearer name.)
"Freaking the mundanes" was a pleasure for a lot of fans.
It may be better for SFF to be mainstream, but part of my point is that it's different than being a niche and somewhat private.
More generally, it seems to me that it's hard now to have privacy if you're doing something interesting, and it might be important to have privacy to develop a sub-culture.
For what it's worth, I was horrified when the Rabid Puppies vs. the Hugos hit the world press. I've never talked with anyone else who cared, though.
I consider what's happened to pole dancing to be something like cultural appropriation-- it started out as sexual display, and now there's a lot of non-sexy athletic competition.
I don't have a strong theory about cursing, but curse words are stronger stuff if you mostly aren't supposed to say them.
Drag Queen Story Hour is a formal organization started in 2015.