I've always been slightly confused by articles about Jessica Mulroney, marvelling at how easy it is to change one's appearance so radically just with some judiciously applied makeup. But now I realise it's because I have been mixing her up with Dylan Mulvaney of Budweiser fame! Something to mull over methinks.
Does anyone know why Kalamazoo and Numazu are sister cities? Did they just choose each other because of the name similarity? Or was that pure coincidence?
I have to say, it's amusing how the guy just leaves the board open while recovering from kidney donation and it turns into a giant Israel-Palestine argument.
It only takes a person or two to make everything go down in flames.
On the other hand, appointing a censor or two might solve the problem. (Not sure if Substack allows it, though.) Someone who in Scott's absence would have the authority to say "stop discussing topic X for one week" and could give week-long bans to anyone who keeps talking regardless. Just until Scott returns and sorts things out.
Thread for publicly sharing anonymized information about the OpenAI board members, since I suspect many readers here have various OpenAI connections.
If I were hoping to find it in my stocking, I'd have already bought it for myself. The stocking is for things I wouldn't think of, but family and friends might.
Been kicking around an idea/observation over the tg holiday.
Call it the “hate coefficient.” This board tends to prioritize verifiable evidence. But in this regard, internet crowd-sourced argument presents a vulnerability. Tom the very-motivated racist, communist, anti-fascist, Palestine-hater or Israel-hater or what-have-you has a near-inexhaustible capacity to dredge Twitter, Wikipedia, Facebook, Telegram, or whatever source is necessary to find evidence, however specious, for his preferred conclusion. In a debate, then, predisposition and motivated reasoning can transmute themselves into an endless barrage of “evidence” for how the blacks are the most hateful race, or how Russians are the real defenders in Ukraine, or what have you.
One should never discount evidence, to be sure, but a disinterested third party trading study for study ends up at a surprising disadvantage: if I care a fair bit about not slandering an entire race, but Tom hates the Jews or the blacks or the French a capital-L “Lot,” then for every item of evidence I’m willing to find and present, he’s more than willing and able to spend as many hours of internet as he needs to find and present two.
The internet being the inexhaustible font of garbage that it is, a sufficiently motivated reasoner can easily drown a debate- not by actual preponderance of evidence, but by “preponderance of evidence I’m willing to find.” A sufficiently motivated flat-earther can just keep digging and throwing up links to the point that anyone contradicting him for the “sake of argument” becomes exhausted and calls it a day.
Which can leave the public square looking like “earth might be flat - Tom’s evidence hasn’t been rebutted” even when the facts on the ground are more like “flat-earth Tom threw up so much garbage that no one had it in them to keep refuting it.”
At the same time, evidence matters. This phenomenon is real, but if you take it as license to ignore facts you don’t like, you’re blinding yourself. I guess you just have to take the grain of salt for very-opinionated-internet-man while also taking that same grain of salt for yourself when applying that label to him.
I don’t know. Reasoning is hard I guess. I wish I had a conclusion or a clear perspective but it seems like a prisoners’ dilemma we’re all stuck with, discounting by the hate coefficient as best we can.
"This board tends to prioritize verifiable evidence", followed by "A sufficiently motivated flat-earther can just keep digging and throwing up links to the point that anyone contradicting him for the sake of argument becomes exhausted and calls it a day"
The latter is called a "Gish gallop", and a Gish gallop is *not* verifiable evidence, because its volume and ephemerality make verification practically impossible. I think most of this board can recognize that when they see it, and properly disengage from it. Which doesn't stop some people from trying it, but I don't recall seeing any of them have any great success here.
I actually like the idea of a Hate Coefficient. The higher the coeff, the greater the possibility that lots of evidence represents a Gish gallop instead of truth.
Theoretically, establishing the hate coeff value shouldn't even be hard: if you disagree, that only proves that the coefficient should be high. If no one can even be bothered to argue the value, then clearly it's very low.
In practice I don't think it would stand up against enemy action or casual trolling.
But if it becomes a thing, then I can use Hate Coefficient as the name for my metal band, which is nice.
Bad evidence doesn't need rebutting, it rebuts itself. People who are convinced by bad evidence aren't worth trying to convince, because they'll be convinced by the next thing they read the second they walk away. So present good evidence, and then leave it alone.
Honestly, most people are not very bright, are easily confused, have hundreds of other things going on, and aren't numerate enough to pick apart bad data anyway. It's why political consultants focus on 'messaging' instead of data analysis.
It's often said that the term "Rationalist" is a poor choice because they're on the empiricist side, but I wonder how much that's actually true. The movement started in the late 2000s when New Atheism vs Creationism was the main war of the day, so there is all the obligatory lip service to The Power of Science and so on, but beyond that, Yudkowsky really seems to prefer rationalism over empiricism.
For example, in HPMOR, while Harry does do *some* experiments, they're unconnected to any of the benefits he gets. Harry's modus operandi is 1) Think about things and decide how the world *must* be based on intuition, 2) Believe *really* hard in your theory, 3) Be right because you're the author avatar and get rewarded with unique magical powers. (At least that's how he got Kill Dementor and Partial Transfiguration - the rest of his powers come from randomly getting OP magical artifacts dropped into his lap for no reason.)
Meanwhile, Yudkowsky's other classic writings seem to have a remarkable amount of contempt for actual scientists for someone ostensibly on the Pro Science side of the 2000s Religion Wars.
Meanwhile, nowadays in the Yudkowsky-derived AI Doomer movement, a common argument is that AI will be able to near-instantly take over the world because Intelligence means you can magically solve everything just by thinking really hard, no observations or legwork required. No, this isn't a strawman; I've seen Doomers *explicitly* make this argument many times, as an argument for why AI takeoff shouldn't be constrained by the speed of running experiments and making observations about the world.
> It's often said that the term "Rationalist" is a poor choice because they're on the empiricist side
Said by people who are not aware that there are multiple traditional meanings of "rationalism".
> Harry's modus operandi is ... Be right because you're the author avatar and get rewarded with unique magical powers.
I think this is a very uncharitable perspective. Although Harry often represents the author's beliefs, it is also often the case that Harry makes a mistake (and Dumbledore or Hermione tell him so). Yes, Harry makes a few good guesses. But the entire premise of the story is that Harry is special, for reasons related to Voldemort. Furthermore, magical Britain is assumed to be a small society isolated from mainstream humanity, where magic is high-status and the things muggles do (including science) are low-status. So it's not just that Harry is smart (although he is); it's also that the others are not even trying (to seriously think about magic from the perspective of science). Partial Transfiguration = Transfiguration (known only to wizards) + Atomic Theory (known only to muggles, most of whom don't think too hard about it).
> Think about things and decide how the world *must* be based on intuition
No no no. You seem to suggest that "empiricism" only means doing the experiments yourself. As opposed to e.g. learning from books written by scientists (who did the experiments themselves). Harry's advantage is not that he thinks too hard and figures out everything from first principles. His advantage is that he has already studied scientific books. He doesn't need to discover atoms, because he already knows that they exist. He only connects the dots ("if transfiguration can change objects... and atoms are objects..."). Connecting the dots of empirically verified findings is not a sin against empiricism. By that logic, Einstein also wouldn't qualify as an empiricist.
(Actually, there is a second, more subtle mistake. Empiricism doesn't necessarily require doing experiments. For example, you can figure out the orbits of planets by observation. Kepler didn't make his own experimental planets, and I would still call him an empiricist.)
> AI will be able to near-instantly take over the world because Intelligence means you can magically solve everything just by thinking really hard, no observations or legwork required.
You ignore the part about the AI escaping from the box. (Which is an obsolete argument, because no one is even trying to keep the AI in a box. It is more profitable to keep it connected to the internet.) No observation? We start by feeding it the entire internet, which includes millions of texts describing the observations we made. Why should the hypothetical superhuman AI not be capable of learning from our observations? No legwork required? Again, you missed the articles describing how an AI connected to the internet could simply ask some humans to do the work for it. (One AI already successfully convinced some people to help solve a captcha, pretending to be a blind human.)
The experiments and other measurements *we already made* probably contain a lot of information we failed to notice. Maybe we were not looking there (an experiment designed to verify a hypothesis X provides data for a different hypothesis Y), maybe we did the statistics wrong, maybe the hypothesis appears more clearly when we put data from a hundred different experiments together, or maybe forming the correct hypothesis would require knowledge of several different sciences put together. Therefore, once we make an IQ 200 AI and feed it the entire internet and Sci-Hub, one of the obvious first questions should be "which important conclusions of our experiments did we miss?". This is not a move against empiricism; it's just doing empiricism better.
> I think this is a very uncharitable perspective. Although Harry often represents author's beliefs, it is also often the case that Harry makes a mistake (and Dumbledore or Hermione tell him so). Yes, Harry makes a few good guesses. But the entire premise of the story is that Harry is special, for reasons related to Voldemort. Furthermore, it is assumed that the magical Britain is a small society isolated from mainstream humanity, where magic is high-status, and things that muggles do (including science) are low-status. So it's not just that Harry is smart (although he is), but because the others are not even trying (to seriously think about magic from the perspective of science). Partial Transfiguration = Transfiguration (known only to wizards) + Atomic Theory (known only to muggles, most of them don't think too hard about it).
Believe it or not, I used to be a fan of HPMOR, and I read the story several times through back in the day. I *know* all that. And I also know that none of that actually has to do with the issues I pointed out.
Harry didn't discover Partial Transfiguration or Kill Dementor due to being a Voldemort clone, since most obviously the real Voldemort never did. Nor is his muggle scientific knowledge relevant at all to the issues under discussion, except insofar as his having heard of Timeless Physics was a prerequisite to be able to Guess The Author's Password in the first case. And for the Dementor thing, you can't even say that.
And no, Partial Transfiguration was **very explicitly** *not* about "just Atomic Theory". It explicitly required him to believe very hard in "timeless physics", the author's own pet theory (which is incidentally *not* the mainstream view of physics). In both cases, it was literally just a case of Guess The Author's Password. He didn't do any science, he just believed really hard in a particular hypothesis and magically got rewarded for it.
Are there actually hundreds or thousands of people who self-identify as Rationalists, or is it just a term that refers to regular readers of Less Wrong?
I'm rat-adjacent. Seems like a good bunch of guys who try to actually figure out the truth and be intellectually rigorous, but I don't read LW or HPMOR and I have no clue what P(doom) is.
In most of Yud's writing or public speaking, he appears to (1) Hold a profound disdain for the intelligence and opinion of his reader/listener (2) Maintain a false Bond-villain-like sense of intellectual precognition, meaning he pretends to know my (== the reader's/listener's) arguments from the comfort of his armchair. Not only is this false, and most of his simulated objections strawmen, but his counters to those objections are themselves not convincing (3) Be an incredibly bad writer, with the two most salient of his bad writing habits being (a) long-winded and excruciatingly detailed defenses of obvious points, or points that most of his intended audience could be safely assumed to know and agree with, and (b) bad/silly/condescending analogies.
If not for the fact of his autism, I would have long, long ago put Yud in the same bucket of utter contempt that I put people like Elon Musk in, the people who are so thoroughly and irrevocably **impressed** with themselves that they simply can't pay attention to anyone but themselves and anything but their own voice. They are narcissists in a literal, Ancient Greek sense: they are infatuated with their own reflection, looking back at them in the form of grand-sounding, shallow-meaning words and armies of fans clapping for those words. Yud comes very close to this archetype but doesn't quite fit in; he always seems clueless as to how arrogant he comes across, and it doesn't feel entirely fair to lump him with the rest.
As a contrast, consider Scott Alexander. (1) Through no less than - I estimate - perhaps 100K words of non-fiction I've read by him, I have never detected a whiff of an effort to make me feel stupid or inferior in any way; on the contrary, he is very honest about his intellectual weak points at times (math, music) (2) (a) Never claims he knows what the imaginary opponent thinks, (b) all of the objections he raises and attributes to the opponent are links to their own words, followed by an interpretation of what those words mean and an explicit disclaimer that this interpretation could be wrong (c) sometimes lets opponents "have the last laugh" by acknowledging when something is value-laden or controversial and that 2 reasonably intelligent people can legitimately agree to disagree on it (3) Is a decent writer in the average case, and a superb writer in the best case (I: https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/, II: https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) (4) Unlike Yud or Scott Aaronson, he is fairly iconoclastic and willing to break the speed limit in the overton window of the day, meaning he doesn't give a shit when his ingroup-adjacent outgroup deploys thought-stopping cliches or clutches its pearls
If you take Scott as the representative for Rationalism, it appears vastly more empirical than if you take Yud. Consider the sheer torrent of studies and RCTs cited in a post like Ivermectin: Much More, or, at the start of his animosity with wokism, Reactionary Philosophy. This is generally one of the things that I skim in Scott's writing and frequently feel stupid if I try to read carefully, because I'm not good with advanced statistics generally, and empirical experiment setups both bore me to death and go over my head.
Other writers Scott has on his blogroll, like the cluster of writers writing about Covid-19 (e.g. the one named Zeinab something), seem to share the trait. Less Wrongers can be a mixed bag.
But my point is that Yud is just an outlier. Most of the conclusions you can draw about Yud are not true of the average Rationalist, or indeed of a non-average leading one like Gwern or Scott.
Um, unlike EY, who's literally done nothing, Elon revolutionized EVs and space travel. You beclown yourself by pretending like he's the one who's a clown. Similarly, Scott's a blogger, routinely benefiting from Gell-Mann amnesia (when he writes about something you know, it's pretty clear Scott doesn't know much about it and he's just a commentator like Noonan, Krugman, Brooks, etc., with opinions generally not worthy of much deference). He's nowhere close to someone like Elon in impact, nowhere close in competency, even at his chosen fields.
Meh, I don't subscribe much to the "Great Man" theory of history and technological progress. Some things are clearly wrought by great men, some physicists say that General Relativity is uniquely Einsteinian, but most aren't, and the few things that are tend to not matter much for the average person. Even if we grant the full premises of the Great Man theory of history, I believe there are so many humans in the modern age that Great Men are a dime a dozen, actually, and any combination of traits is out there somewhere, they are just starved of power/money/attention amidst all the hordes of other great men and ordinary men.
Even if I grant your premise that Elon is literally the George Washington of EVs and space travel, what does that have to do with the fact that he's an arrogant clown with bizarre actions? I can hit Wikipedia now and amuse you with the tales of any number of eccentric historical figures who were capable of great brilliance as easily as they were capable of immense stupidity. Maybe Elon revolutionized EVs and space travel, but the fact remains he's a crypto grifter and a stupid buyer of a social media corp that is now not worth half of what he paid. Two things can be true at the same time.
> He's [Scott] nowhere close to someone like Elon in impact, nowhere close in competency, even at chosen fields.
Impact can be argued for, even though it's a bit unfair to compare a fairly mature field like psychiatry to a nascent field with lots of low-hanging fruit like commercial space travel. But how do you know competency? Do you even have any benchmark for comparing two different sorts of competencies like Scott's and Elon's in an apples-to-apples fashion? Do you know Scott's exact level of competency?
I also hold a low opinion of EY, but he *is* the founder and original thought leader of the movement commonly called Rationalism, so it makes no sense to claim he is not representative. Maybe you could distinguish between different branches, like classical 2010-era Yudkowskian rationalism and the more conservative and skeptical offshoot led by Scott Alexander, etc. But it seems like EY is still pretty popular on LW and in AI doomer circles, even if they don't necessarily agree with him 100%.
Thing is: an LLM (a LLM?) is an amalgam of human observations and legwork, so whereas we can choose between 'thinking' and observing/legwork, it seems an LLM doesn't do any 'thinking' which isn't data-rich (the same could be true for humans - "nothing is in the intellect which has not been in the senses"). I suppose you're querying how a superintelligence might come up with original observations or experiments. But most original observations are surprising readings of existing data, and even with totally new experiences, perhaps a superintelligence is more likely to spot a black swan. I'm less clear how a superintelligence would organise an experiment.
I wouldn't describe LLM AIs as "thinking", not yet anyway. It's more like "pattern-matching". Is there a pattern in the organization of experiments, which is in the AI's training set? Then it's probably abstracted that pattern and can apply it to novel situations, but perhaps not as well as if it had been specifically trained to do that.
(Frankly, I think this is what humans do most of the time, too, and even a lot of what passes for us "thinking" is just us doing some pattern matching to what we think of as examples of thought.)
> Most original observations are surprising readings of existing data
I am curious where this comes from, or what you mean by this exactly? My prior would not be that most original observations can be thought of as new surprising readings of existing data. Rather, I would think original observations most of the time are derived from new data or information that becomes available, or old data that is read in a new context. Maybe this is what you mean - but this doesn't support an LLM suddenly getting huge new insights from existing data. Don't get me wrong, I absolutely think a new tool such as LLMs can shed new light on old data (they already do, after all), but I also think there are limits to what can be derived.
When I say surprising, I mean to other people - the person doing the research isn't trying to be revolutionary or whatever, just looking at the data very carefully and seeing something in it no-one has before
I'm just a layman, so probably not thinking about this in a thoroughly joined-up way, but I'm thinking about the Sequences, in particular 'Einstein's Arrogance', dealing with a question about Eddington's astronomy experiment. EY's point was that Einstein had already seen enough evidence to believe his theory and didn't need the experiment to confirm it. So Einstein looks at the data, 'finds' relativity, and that's it as far as he's concerned - if this is a fair description of scientific discovery, you could imagine an AI doing something similar, and therefore not being as dependent on experiment (whether existing LLMs can do it I don't know)
Just a layman (in physics) too. But I think it is a reasonable description to say that Einstein looked at already-discovered data and found a new and surprising observation
(relativity) - I just don't think this is how most new discoveries are actually made. EY's logic that a superintelligence would find a lot more connections like this may be reasonable, but it is also quite possible that if Einstein had not discovered relativity, somebody else would have, maybe just a bit later. And therefore most such "discoveries" purely from data have already been thought of. My bet is that there are some useful discoveries to be made from such available data - but that many other things will require experiments. Another way to look at it is that purely theoretical (e.g. mathematical) discoveries can mainly be inferred from data or from thinking hard - while engineering, and using that theory to do useful things in the real world, requires experimentation.
So to still use the Einstein analogy - a superintelligence could discover relativity really fast, but could not develop nuclear weapons fast, because developing nuclear weapons would probably require a lot of experimentation and development that you can't simply think your way around without any feedback. It certainly works that way for human intelligence.
>superintelligence could discover relativity really fast
And therefore be able to appreciate the amount of energy potential in fission from e=mc2? AFAIK that is not useful at all in developing an actual bomb. From a brief look at Wikipedia, the discoveries leading to atomic bombs were made by chemists - presumably by doing experiments - so there you go...
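To put a number on that point (a rough back-of-the-envelope sketch with round constants; this is just the mass-energy relation itself, nothing bomb-specific):

```python
# Back-of-the-envelope: energy equivalent of 1 gram of mass, via E = m * c^2
c = 2.998e8                  # speed of light, m/s
m = 1e-3                     # 1 gram, in kg
E = m * c**2                 # energy in joules
kilotons_tnt = E / 4.184e12  # 1 kiloton of TNT is defined as 4.184e12 J
print(f"{E:.3e} J, about {kilotons_tnt:.0f} kilotons of TNT")
```

So e=mc2 tells you a gram of mass is worth roughly twenty kilotons of TNT, which is exactly the kind of thing you can know from theory alone, while saying nothing about how to actually liberate that energy via a chain reaction.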
I think the assumption that (I think) you implicitly make here is that all the necessary data the AI needs to "take over the world" is readily available in a form the AI could use. I think this is false. Actual raw data from experiments is in fact often not published (at least in my field) and is often not readily available in a form the AI could use. The scientific papers describing experiments and data are what is available. Of course a lot of information could be derived from that as well - but I think in practice experiments would be necessary even for a superintelligence to make such huge leaps as are suggested, e.g. inventing an army of nanobots.
Just to put my original post in context: I'm not an AI doomer, but I think the weakness in the doomer position is probably an unforeseen glass ceiling, either scarce resources or engineering constraints. But to the extent those constraints really can be overcome, it seems like it's all systems go for the superintelligence, and any dependence it has on data can soon be sorted out by gorging on all the data in the world; and if there are paywalls or security in place, it can learn to hack them. But I agree that is speculation, which is why I'm not (yet) a doomer
I agree - I think there will be hard-to-work-around constraints for superintelligence, as there certainly are for humans. I'm an engineer, and I know how messy engineering can be - theory can only get you so far, in my experience. My intuition is that a superintelligence, no matter how smart, can't think its way around everything. Even so, it would certainly help to be really smart and to have the combined human knowledge available. Of course, I could be wrong, and we could be in huge trouble even if there are significant constraints.
Hi Leppi, there's a new open thread so probably time to wrap up. I just wanted to say thanks for your posts, and I may slightly downgrade my doomer position as a result, but it's already pretty low. I'll leave you with a flippant version of the ontological argument: if we can't conceive of a superintelligence that isn't dependent on experiments, perhaps we haven't yet conceived a true superintelligence. Not a great argument, but it's all I've got!
Hi! Thanks for the discussion. I feel like pushing back a bit against what I perceive as hyperbole and figuratively extrapolating exponentials regarding AI (if that makes sense). Not saying that you represent that - but some people like EY do I think.
That being said, I think if we develop AGI and ASI it can for sure also be dangerous, and looking at AI risk is absolutely warranted.
It could also be argued that "rationalism vs empiricism" is one of those dumb historical philosophical conflicts which isn't really relevant today. Nobody really argues about it, they just learn about it in undergrad philosophy and have to pretend it's a sensible argument, but it's 2023 and we all fundamentally agree that the answer to whether knowledge should be obtained from reason or experiment is "Well yeah, both, obviously, depending".
Yeah, I think it's largely meaningless and irrelevant to the current naming debate, I just thought it was an interesting connection since people often criticize the name on this grounds.
"Rationalist" might be a bad name just because it sets some people off. I know a couple of otherwise intelligent people who apparently couldn't think about what rationalists might be like because they were stuck on the idea that people aren't rational, or at least not very rational.
One more thing about the intractability of the fight between Israel and Palestine: they aren't the only players. At a minimum, there's Iran supporting Palestinian aggression (possibly also Russia) and American millennialists who want to start the end of the world at Megiddo.
I'm not willing to go as far as asserting that people are hoping to end the world, but certainly some (the church I attend, at least) believe that the existence of the Jewish people is important to the successful resolution of Revelation, and that if they were wiped out, that could perhaps somehow cause issues. It's vague (because Revelation), but certainly they believe that helping the Jewish people maintain control of Israel is important for the successful conclusion of God's plan.
I asked my facebook readers about this, and you can see a bunch of answers. Here's one of the better ones:
"John Hagee’s Christian Zionist organization, Christians United for Israel, has over 10 million members, which means that one Christian Zionist organization alone, not counting any other Christian Zionist orgs, has more members than there are Jews in the US (about 6 million, according to the Pew Research Center, and not all Jews are Zionists).
Academic Tristan Sturm estimates the number of Christian Zionists in the US at around 30 million — almost 10% of the total US population, and twice the worldwide Jewish population. That’s a large enough faction to influence US politics, and the US is a major contributor to Israel’s military efforts."
The question wasn't how many Christians support Israel, it's how many Christians support Israel because they want the world to end and think that Israeli control of Israel is somehow required for that.
>A study of evangelical Christian giving to Israeli nonprofits covering a longer time period – from 2008 through 2016 – identified 11 organizations donating an estimated total of $50 million to $65 million over the entire period...While this is less than 3% of all of the funds Israeli nonprofits obtained in foreign donations, we believe it’s worth watching this trend in part because the amounts grew in the period we reviewed.
3% of foreign donations from evangelical Christians in particular (and probably more from other Christian denominations) is probably a significant amount in absolute terms, but not the most significant in relative terms.
But Israel has every right to occupy Palestine, or any of their other neighbours who have attacked them. Just as the Allies had the right to occupy Germany and Japan after WW2 -- if you start a war with someone then they have every right to defeat you and occupy you. We didn't just beat the Nazis back within German borders and try to coexist with them, we wiped them the fuck out.
This occupation needs to last as long as it takes for the ideology which refuses to live in peace with its neighbours to be eradicated. In Germany, we de-Nazified the place and withdrew within four years, and it worked out quite well. In Palestine it apparently hasn't worked so well; every time the Israelis withdraw, they get attacked again. I don't know what the equivalent of de-Nazification in Palestine might look like, but it certainly hasn't happened yet.
Only resolutions by the UN Security Council are "binding", not resolutions (recommendations) by the General Assembly.
Besides, under the Geneva Conventions, which protect civilians and noncombatants, Hamas's murder and abduction of Israeli citizens on October 7th, plus the indiscriminate rocket attacks, are super-illegal and immoral in the first place!
To hold only Israel accountable, but state that Hamas can murder children and take old grannies hostage, is inconsistent and motivated reasoning on your part.
There's an article that is making the rounds of the rat blogosphere that I think is seriously wrong. You've probably seen it quoted. It blames the ALARA (as low as reasonably achievable) radiation protection standard for all the economic problems of US nuclear power. From https://worksinprogress.co/issue/taming-the-stars/:
"ALARA is defined as: "making every reasonable effort to maintain exposures to radiation as far below the dose limits in this part as is practical consistent with the purpose for which the licensed activity is undertaken, taking into account the state of technology, the economics of improvements in relation to state of technology, the economics of improvements in relation to benefits to the public health and safety, and other societal and socioeconomic considerations, and in relation to utilization of nuclear energy and licensed materials in the public interest." [footnote citing 10 CFR 20.1003]
As currently applied to nuclear power, ALARA literally means that every expense must be spent on eliminating every possible effect of nuclear power, at least until the resulting electricity is no cheaper than what the market pays for electricity generated from non-nuclear sources. Since standards cannot ratchet downwards, only up, safety standards that are just about affordable at the top of energy price spikes get entrenched, meaning that nuclear is made unaffordable until the next price hike – which makes it even more expensive, since it prevents learning and the economies of scale that a steady pipeline of projects can allow. ALARA, as currently applied in the US and much of the rest of the developed world, means that nuclear power is never allowed to be cheaper, no matter how much safer and cleaner it is than other sources of energy. It makes affordable, safe nuclear energy impossible, and forces us to rely on much less safe energy sources instead." End quote.
The first paragraph is a literal quote from the regulations. Everything after that, where the author tells you what ALARA "literally means", is wrong. At least, I think so. To the extent I understand the claim being made here.
Is the author saying that nuclear regulations actually change in response to energy prices? This absolutely does not happen. Is he saying that inspection standards or radiation protection procedures change with energy prices? So that some regulator or energy company employee is actually making the decision to increase radiation protection standards when they observe nuclear becoming cheaper compared to non-nuclear energy? Highly implausible. Energy prices change all the time, and regulations/inspection procedures/radiation protection procedures are only changed in a slow and cumbersome way. Also, industry would have no incentive to make itself less competitive, and it is very much NRC culture to NOT pay attention to energy prices.*
Okay, maybe the author is making a more general claim that the level of safety/security regulation increases over time, it's a one-way ratchet and regulation prevents nuclear power from being as cheap as it arguably should be compared to other energy sources. A fair but unoriginal claim. But then why the talk about ALARA?
First of all, understand that ALARA is about radiation protection. It is not the be-all and end-all of nuclear regulation. The ALARA standard adds on to other radiation dose regulations. For example, a typical nuclear power plant worker can get a max of 5 rem per year of occupational radiation exposure (10 CFR 20.1201) AND their radiation dose must be ALARA. So if a worker gets more than 5 rem, it's a violation of both regulations. If a worker gets less than 5 rem but the plant does not make a reasonable effort to keep the dose ALARA, it could be a violation of the ALARA standard. The conclusion: even if the ALARA standard didn't exist, nuclear plants would have to put significant effort into radiation protection, albeit not quite so much.
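To make the two-layer structure concrete, here's a toy sketch (not any real NRC tool; the function and its inputs are invented for illustration) of the two separate checks described above: the hard dose limit in 10 CFR 20.1201, and the independent ALARA test that can be violated even when the dose limit is met.

```python
# Toy illustration of the two independent radiation-protection checks.
# Numbers and function names are invented; only the 5 rem limit and the
# two-regulation structure come from the discussion above.

REM_ANNUAL_LIMIT = 5.0  # occupational dose limit per 10 CFR 20.1201

def compliance_issues(annual_dose_rem: float, made_reasonable_alara_effort: bool) -> list[str]:
    """Return which of the two regulations a hypothetical plant has violated."""
    issues = []
    if annual_dose_rem > REM_ANNUAL_LIMIT:
        # Exceeding the limit violates the dose regulation outright...
        issues.append("10 CFR 20.1201 dose limit exceeded")
        # ...and a dose that high was, by definition, not ALARA either.
        issues.append("ALARA standard violated")
    elif not made_reasonable_alara_effort:
        # A dose under the limit can still violate ALARA if the plant
        # made no reasonable effort to reduce it further.
        issues.append("ALARA standard violated")
    return issues

print(compliance_issues(6.2, True))   # over the limit: both violations
print(compliance_issues(3.0, False))  # under the limit, but no ALARA effort
print(compliance_issues(3.0, True))   # compliant on both counts
```

The point of the sketch: deleting the `elif` branch (i.e., repealing ALARA) would still leave the hard dose limit in force, which is why the plant's radiation-protection burden wouldn't vanish without ALARA.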
I'm not gonna say ALARA is unimportant. But it's only one of a whole host of regulations that apply to nuclear power plant design, construction, operation, and decommissioning. There are regulations that apply to nuclear security, reducing and mitigating the risk of nuclear accidents, emergency planning, environmental protection, and I could go on. There would be a significant regulatory burden even without ALARA.
Maybe the author is using ALARA as shorthand for the entire group of US regulations and laws relevant to nuclear? Or the entire regulatory mindset? But, if your argument is that nuclear regulation should incorporate cost considerations, why pick on one of the regulations that explicitly incorporates consideration of cost, instead of the many that don't consider cost at all?
Another quote: "[T]he components that are not safety critical are still subject to a gold plated ALARA standard. This means the same component is regulated differently depending on whether it is in a coal plant or a nuclear plant, even if it is far away from the reactor and cannot affect it."
False. The reason that a component in a nuclear plant is regulated differently from a component in a coal plant is that different laws, regulations, and administrative agencies regulate nuclear plants from those that regulate coal plants. ALARA has absolutely nothing to do with that.
I hate to be all argument from authority, but I notice the author, John Myers, seems to be a UK YIMBY activist and if he has any experience in US nuclear, I'm not aware of it. Please understand that the statement "ALARA, as currently applied in the US and much of the rest of the developed world, means that nuclear power is never allowed to be cheaper, no matter how much safer and cleaner it is than other sources of energy" is false. That is not what ALARA means. ALARA is not that powerful. Please stop quoting this guy uncritically.
*Because NRC's mission is to ensure nuclear safety and security, not to ensure that the US nuclear power industry is economically viable. If you want something to complain about, ask Congress to change that.
One of the biggest problems with this type of argument is that nuclear power is uneconomical everywhere, not just the US. Finding out about Flamanville 3 was a major update for me, especially since the nuclear fanboys often point to France as the place that got things right.
The reason they think there's a relationship between energy price and regulation amount is the word "reasonable", because they read "unreasonable" as mostly synonymous with "too expensive".
So, as safety features get cheaper or easier to implement, what constitutes "reasonable" grows more expansive.
For example, it may start out unreasonable to require everyone to wear hazmat suits all the time while also maintaining a nuclear plant. But say that, after 50 million dollars of work designing more and more passive safety features, instead of saying "we have met a reasonable standard" and stopping, someone goes "but wait! We could get even safer by forcing everyone to wear hazmat suits! That's just not as much of an imposition anymore, and it only costs 10 million, which is still 40 million below the previous reasonable threshold!" This process repeats until the energy price of nuclear goes up, at which point someone can point out that safety beyond that margin is unreasonable.
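The ratchet dynamic above can be sketched as a toy model (all numbers and the decision rule are invented for illustration, not taken from any regulation): a measure looks "reasonable" when it is cheap next to what has already been sunk into safety, and the sunk total only ever grows.

```python
# Toy model of the one-way "reasonableness" ratchet described above.
# Decision rule and dollar figures are invented for illustration.

def ratchet(measures, already_spent):
    """measures: list of (name, cost) in $M, considered in order.
    A measure is deemed 'reasonable' if it costs no more than the
    safety spending already sunk -- which each adoption increases."""
    adopted, spent = [], already_spent
    for name, cost in measures:
        if cost <= spent:
            adopted.append(name)
            spent += cost  # the bar only ever moves upward
    return adopted

# Early on, with only $5M sunk, a $10M hazmat-suit rule looks unreasonable:
print(ratchet([("hazmat suits", 10)], 5))
# After $50M of passive safety design, the same $10M rule looks reasonable:
print(ratchet([("hazmat suits", 10)], 50))
```

Nothing in the loop ever removes an adopted measure or lowers `spent`, which is the one-way ratchet: only an external shock (like an energy price spike) resets what counts as reasonable.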
I have some ideas about what Israel should be doing. I'm not sure whether I'm right, nor whether this is psychologically possible even if it is right.
I think Israel should be looking to its own borders and its own safety. Wrecking Gaza will not necessarily make Israel safer, and may be putting it at more risk. It's certainly creating more hatred for Israel, and I gather there are Hamas leaders in other countries -- they aren't at personal risk from the attack on Gaza.
10/7 wasn't just an atrocity, it was an embarrassment. I assume the borders are getting more attention, but are they getting more thoughtful use of tech? Bulldozer-proof barriers?
Destroying Hamas' tunnels has some practical and humanitarian issues, but additionally, the attack came by air and sea as well as underground.
As I understand it (discussion is welcome), Hamas' intent was to provoke Israel into a drastic reaction so the world would stop supporting Israel (maybe also to make it more likely for Muslim countries to attack Israel), so that Israel could be destroyed. It's a vile approach, but it might actually make some practical sense. I doubt that Israel will be destroyed, but I still think it would be bad if it were on the receiving end of a big attack.
A part which might not be psychologically possible is to quit abusing Palestinians. Torture and a lot of imprisonment might, oh maybe just might, have something to do with why it was possible to keep such tight security on the 10/7 attack. I'm not sure how many people were involved, but I'm expecting low thousands.
Maybe they *were* warned. I get the impression the Israeli government didn't want to believe such an attack was possible.
Meanwhile, Israeli military capacity is being spent on wrecking Gaza, and perhaps the most valuable thing being wasted is attention.
Just by the way, Netanyahu is staying in power while the attack on Gaza is going on. I'm not sure when the next possibility for getting him out of office is, though I'm betting he will be out. In any case, his incentives to continue the attack are personal as well as emotional.
Sidetrack: it may not be possible to get all the hostages back. I wouldn't be surprised if some of them are dead, and I've heard a plausible claim that some of them are being held by groups other than Hamas.
I've thought from the start that this is a crafty land grab by Israel, and nothing I've seen since has changed that view -- if anything the opposite. Urging Palestinians to move to south Gaza, then shutting down all utilities in northern Gaza and trashing it ever since to encourage the Pallies on their way, and then the IDF promptly occupying most of it, is all a bit of a giveaway that their aim is to annex at least northern Gaza if not the whole lot.
The Israelis must have known about the Hamas plan in advance. For example, it was reported (although how reliably I don't know) that Egypt warned them about it some days previously. So, by not taking more precautions in anticipation of it, one can only assume the Israelis were willing to let it proceed to its full extent, so they would gain the sympathy and support necessary to invade Gaza in their turn.
Obviously it's unfortunate for the Israeli hostages, and innocent Palestinians come to that, but if the above supposition is true then the policy is evidently that regrettably they are expendable for a greater long-term benefit to Israel. The Israelis may even be able to get most hostages back, as well as keeping northern Gaza, a double win!
Note that I am not criticizing Israel. Netanyahu seems like a true statesman, willing to make strategic decisions at the risk of his own popularity. In any case, Hamas itself brought all this on the Palestinians. Also, a big punch-up was inevitable sooner or later anyway, due to the Palestinian population in Gaza increasing so rapidly.
Israel has occupied Gaza before and could do so again any time they wanted to. This sort of 5D casus belli makes no sense in reality, even for, say, Pearl Harbor, let alone here.
Israel unilaterally abandoned Gaza in 2005, forcibly evacuating the whole Jewish population. They haven't occupied Gaza in the many wars that Hamas started by shooting rockets at Israel. Israel has offered Gaza to Egypt, but Egypt didn't want it. Why would Israel, or anyone in their right mind, want Gaza? And why would any Israeli officials want to go down in history as an epic failure by getting their acquaintances or relatives killed (*everyone* in Israel knows someone who died in the attack) just so they can get a small piece of land with no resources that's at best full of rubble, at worst full of Palestinians who want to murder them? You are suggesting a conspiracy theory that not only paints the Israeli government as cartoonishly evil--which is already a red flag--but as wildly irrational at the same time.
2005? That was nearly twenty years ago. As I mentioned, the Palestinian population in Gaza has been ballooning in recent years, and by now has probably almost doubled since then.
Regardless of what seemed the best option in 2005, a rapid, and likely continuing, exponential increase like that, on what you yourself call a "small piece of land with no resources" mandates an urgent change of policy before the rest of Israel is threatened to an existential extent.
Yes, there are roughly twice as many Palestinians in Gaza now as there were in 2005. That makes Gaza twice as unappealing a place for Israel to have anything to do with now as it was in 2005, and they went to a great deal of trouble to pull out of Gaza then.
Israel does not want Gaza. If it didn't have Palestinians all over it but were in its pristine natural form, sure, it would be worth something. But you could turn a Gaza-sized strip of the Negev Desert into a decent place to live easier than you could turn Gaza as it presently is into a decent place for Jews to live. Israel doesn't want it.
They might be stuck with it, though, because nobody else save Hamas seems to want it either.
John, I don't know if you're based in the US, where in most areas people can be relaxed and choosy about land because there is so much of it. But in a small country like Israel you can never have too much land, and every scrap is valuable, even if it is a barren dusty wasteland. With know-how and commitment it might not stay that way.
Coastal land is even more potentially valuable, for example as holiday resorts, with their tourist dollars, or desalination plants, or nuclear power stations with a handy and ample supply of cooling water.
Also, land isn't just about places to live or grow crops. Land is a military asset, and the more "hinterland" you have, even if uninhabited desert, the more time and elbow room there is to counter incursions. For example, that's why most ancient cities were founded a few miles up-river from the sea, to give some advance warning of sea-borne invasions and time to prepare!
If Gaza were a barren dusty wasteland, sure, Israel could do something with it and would probably try.
Gaza isn't a barren dusty wasteland, it's a war-torn city with a couple of million Palestinians all over it. The couple million Palestinians are a huge *negative* to Israel, one that far outweighs the value of a few hundred square kilometers of barren dusty wasteland and/or ruined city. And Israel is not going to ethnically cleanse Gaza of all those Palestinians, no matter what some people here like to claim. So, owning or occupying or administering Gaza is a negative for Israel.
Of course, living next to Hamas is *also* a negative for Israel, and 10/7 changed the calculus on which is the lesser evil. So I expect we will see Gaza under Israeli rule for the next few years. But as an instrumental goal, not a terminal one.
And Israel has been able to obtain that, if it wanted, since 1948 by killing or deporting all the people. It has never acted on its supposed desire. Even in the recent war, Israel is not expelling Palestinians from land it conquers. In what way is it a serious desire if Israel never acts on it?
Wow. That's definitely picking the situation up by the other corner.
I'm more inclined to believe in stupidity on the Israeli side rather than plotting, but I don't know what can be proven. The version I'm familiar with is that Egypt did warn them, but the Israeli government wanted to believe Palestinians had been mollified with jobs, and there hadn't been an attack for a while.
Would Israel want a land grab of utter wreckage in Palestine, possibly with extra attacks and terrorism? I don't think so, but it's hard to tell.
It's true that what's being done to drive people out of northern Gaza when they have no refuge anywhere is a disgrace, but was it intended from the start? What could be used for evidence?
It's not just unfortunate for the hostages, even if you have no sympathy for Palestinians. There are the 1200 dead and their families and friends, at least.
I was concerned about appearing sociopathic with my rather chilly analysis, but I should have remembered this is ACX.
What kind of deal would Hamas agree to? Its current charter is pretty clear that Hamas only views the 1967 borders as a starting point for a unified Palestine from the river to the sea. Its earlier charter was even more explicit.
Likewise, the remaining Arab states didn't signal their willingness to negotiate when they issued the Khartoum Resolution (also called the Three Nos: No peace, no negotiations, no recognition of Israel.)
If you really think that the Israelis are the impediment to peace, should they embrace the Khartoum Resolution and the Three Nos? Would that be a step in the correct direction?
Sure, so Israel should go back to its 1967 borders. Like how it was in 1966. Did Israel have peace with its neighbors in 1966?
Did they ever offer to make peace before 1967?
Because that would undermine the idea that Israel’s expansion is the impediment to peace. Hamas’ charter makes clear that Palestine has to go from the river to the sea - any Israeli border is an impediment to “peace” - with “peace” here meaning capitulation and an unconditional Arab victory.
Israel indeed opposes such a peace, just as Palestine doesn’t seem keen on unconditional surrender either.
So I’m interested in what you think the Arab states are willing to give up as part of a peace deal. The Khartoum Resolution doesn’t provide much to go off of, does it?
There is no reason to think the Israeli government wanted an attack on the scale of 10/7. As for how much they wanted to attack Gaza, it's hard to tell. They'd been going along for years without comprehensively bombarding Gaza, so maybe they didn't really want to.
It's all very well to talk about revealed preference, but you also need to estimate what hints about what people want might be relevant.
Considering the amount of backlash they created, maybe it wasn't a *well*-calculated loss.
At the bare minimum, the support they have among the USA's youth seems to be in a downward spiral. Those are the future congressmen and congresswomen whom, in 40 or so years, they would have to bribe to get their yearly X billions in aid. That's not to mention Europe, which is geographically closer and influential with the US.
And for what, exactly? What has the IDF concretely accomplished other than 12K dead, 1 million+ displaced, and a northern Gaza full of rubble, destroyed armor, and Hamas? Not to mention the economic havoc of $260 million down the drain a day and 350K Israelis scattering into the diaspora outside Israel.
No long-term investment can be judged in 2 months, but Genocide in front of the camera looks bad for business.
Well let's check your first comment... where was it... ah yes.
>commit an atrocity so horrible that it could justify even genocide in retaliation<
That's not something that just goes away afterward, that's a scar in the public conscience for decades. That's a wound that keeps bleeding.
As is an invasion. It's been less than two months since this attack and people are already clamoring for Israel to calm down. How long were people complaining about Iraq and Afghanistan?
Just clicked through the "implicit association test" Scott referenced in his "Quests and Requests" post, and got a strong perception that I would get about the same bias given black/white colored squares instead of dark/light skinned people. I think in my mind, negative emotions are in some part defined as negations of positive emotions, and dark skin - as a negation of light skin. So it's natural that it's easier to hold positive<->positive association versus positive<->negative.
It's also a bias of sorts, but not _that_ kind of bias Scott was hinting at, it seems
In my opinion, the implicit association tests don't show racism at all. They just show associations. This is why black people also, on average, register as "racist" against black people on the tests. You could probably make an implicit association test show that people associate white people or soldiers with Nazis more than they associate other races or professions with Nazis. This doesn't mean people are racist against white people.
Racism may imply you have certain associations, but the reverse is not true.
What are examples of times and places when political change has been both fast and good? (Good in your opinion; fast with respect to typical political change throughout history.) Change directly related to the end of long wars, independence wars and the fall of the USSR don't count.
To be clear: it can be after a (non-independence) revolution, but not a time when things are much better simply because a time of peace has followed a time of war.
The Schleswig-Holstein question had a nice resolution. The lands had bounced back and forth between Denmark and the HRE/Prussia/Austria/Germany for centuries, and were thought of as an insoluble problem. When Germany lost WWI, the Allies let Denmark decide what to do, and to their credit they held a plebiscite. Problem solved. Not even Hitler changed the border.
Lord Palmerston: "Only three people have ever really understood the Schleswig-Holstein business: the Prince Consort, who is dead, a German professor, who has gone mad, and I, who have forgotten all about it."
Another instance of a border question included for comic relief:
"In 1984, Canadian soldiers visited the island and planted a Canadian flag, also leaving a bottle of Canadian whisky. The Danish Minister of Greenland Affairs came to the island himself later the same year with the Danish flag, a bottle of schnapps, and a letter stating "Welcome to the Danish Island" (Velkommen til den danske ø). The two countries proceeded to take turns planting their flags on the island and exchanging alcoholic beverages. There have also been Google ads used to "promote their claims"."
I'd nominate the split of Czechoslovakia into Czechia and Slovakia. It was negotiated and carried out entirely within a single calendar year; it was entirely peaceful; and it created two stable and culturally-coherent democracies. How many peaceful national divorces have ever even been attempted let alone quickly accomplished?
A quick google search tells me that Czechs are about 1% of the population in Slovakia and Slovaks are about 2% in the Czech Republic. Were the populations intermingled before the split or were the borders easy to draw according to the demographic distribution? If the latter, the ease of creating commonsense homogeneous nation-states might explain the relative painlessness of the divorce settlement.
AFAIK, Czechia is basically the old Holy Roman Empire regions of Bohemia and Moravia, while Slovakia was a Slavic country ruled by Hungary, the Ottomans, and the Austro-Hungarians, with maybe Poland in there somewhere for good measure. So I've had the impression that they were fairly distinct, like Austria-Hungary was.
The post-WWII treatment of the Axis powers by the Allies (including the Marshall Plan) seems to fit. At least in the sense of "good" centered on none of Germany, Japan, or Italy invading anyone since then (that I recall). Corrections welcome! (Yeah, it is a change related to the end of a long war, but it isn't _just_ the end of WWII. The peace afterwards was managed much better than the aftermath of many (probably _most_) wars.)
It seems off to me to separate the postwar settlement from the war itself in answering this question. I don't think the specifics of those postwar rebuilding and rehabilitation programs could have happened without their complete defeat in war. War is politics by other means and all that.
In fact, the defeat of Japan was somewhat less complete than that of Germany, which may have been expedient but affected its "spiritual" rehabilitation, for lack of a better word, and this has had lasting consequences with respect to its relations with its neighbours.
That's fair. I consider it useful to consider the postwar settlement special, mostly because it was remarkably successful in comparison to many other postwar settlements - even in cases where the end of the war appeared to be equally decisive.
Many Thanks! Interesting! So there was an analog to Project Paperclip internal to West Germany?
"But it's also empirically true that there's no evidence they were plotting a nazi coup or future wars, because the military balance of power and overall economic conditions had changed so much that they genuinely gravitated towards a mostly democratic ideology within the framework of being a US client state."
Compared to most outcomes of most wars, I'd count that as a success.
Many Thanks! It is amazing that the outcome was as favorable as it wound up being, particularly since the forces driving the loosening of the process were schedule pressure and manpower limits rather than any careful calculation. That Germany wound up neither as permanently resentful as after WWI nor reverting to Nazi rule seems like amazingly good luck. This makes it clearer why so many other postwar outcomes were so dismal.
I think you are probably grinding an axe here, but I honestly can't tell which one.
The years of occupation were certainly part of what the Allies did, and I assume that they were part of why the Axis powers were turned into nations that everyone could live with. In that sense, it worked, while similar attempts by the USA more recently have failed e.g. in Afghanistan.
I'm guessing that "re-education of those savage barbarians" is sarcastic, but I don't know what specific axe you are grinding here.
Is something false? What, specifically?
Is there something you don't like? What, specifically? And what would you have preferred as an alternative?
I don't agree that the Germans are uncivilized. What has been problematic in the past is that they tend to be more earnest and enthusiastic than most, in that having decided to do something, they go at it hammer and tongs and sometimes don't know when to stop! Of course that need not be a Bad Thing, and is usually quite the opposite.
If you or I were preparing an encyclopedia of chemistry, for example, we would probably be content with ten volumes. But a German professor wouldn't be satisfied with less than twenty. Actually, I think there is some scientific encyclopedia with seventy or more volumes, and the editors are inevitably - you guessed it - German! :-)
If we were drinking in a bar one evening, we'd probably have had enough after five pints. But a German drinking party would drink ten pints, then at 2am tickle their throats to honk up, after which they could start on another ten.
The transition in Spain comes to mind, which took place after the death of the dictator Franco and led to the establishment of a democracy within two years and a peaceful handover of power a few years after. It is probably most remarkable in that Franco appointed Juan Carlos as his successor and all signs pointed to a continuation of dictatorship. Instead he rapidly instituted a democracy and willingly gave up his powers.
Last year I read a book with a lot of stuff in it about Churchill during the war. There were accounts of him wandering around his residence in the night wearing outrageous get-ups. I have forgotten the details -- but some were women's clothing, like maybe a lacy negligee, and some just absurd, like maybe a clown suit. Also accounts of his champagne dinners attended by his staff, visiting dignitaries, etc. Churchill would sometimes lead the group in skipping in circles around the table, I believe with music playing.
Others who have read about these things -- how do you think of them? I know he was an alcoholic -- I know he was not crazy. Why did he do those things? Was there more tolerance then for eccentricities of this kind? Was it a way of demonstrating his self-confidence? -- like that he was so sure that he was admired and respected that he felt able to indulge his weirdest whims in public? Was it a way of making fools of his dinner guests?
I remember reading an article that claimed Churchill deliberately cultivated a reputation for carefully-chosen socially acceptable vices because he felt it made other politicians more comfortable dealing with him. The focus of the article was on his reputation for heavy drinking, but eccentricities of private dress and behavior seems like it might be more of the same.
About his drinking in particular, the article talked about (citing statements by one of his daughters) how, during his time as a cavalry officer in India, he got in the habit of drinking what his daughter called "Papa Cocktails" in the mornings, consisting of a big glass of water with a small splash of whiskey for flavor, which he'd nurse for several hours. So other people were seeing him drinking giant whiskey cocktails in the morning and assuming he was consuming a lot more booze than the half a shot or so that was actually in the drink. And since heavy drinking was then considered a relatively harmless vice, he considered it useful to encourage the perception rather than correct it.
By 'comfortable', do you (or the article) mean that the other politicians would have been less comfortable attempting horse-trading with people whom they perceived as puritanical? Or is this 'comfortable' in the more basic sense of, 'I feel like I can be myself around that old boozer?'
I'm not entirely sure, as it's been a while since I read it. But of those two, I think it was more of the latter. There was also an element that Churchill was obviously extremely talented and rose very high very quickly relatively early in his political career, becoming a cabinet minister at the age of 34 (his immediate predecessor and successor in that position, President of the Board of Trade, were 11 and 21 years older than him respectively) and being transferred to one of the most senior cabinet posts (Home Secretary) a couple years later, so having some visible flaws made him seem less threatening to his more senior coworkers.
If Supply & Demand is a thing, why does Black Friday exist?
If Black Friday is driven by a spike in demand, then I'd expect prices to grow rather than shrink. If Black Friday is driven by the supply side, wouldn't concentrating the costs of logistics/production into a single month make less money than smooth, continuous operations over the course of the year?
The common wisdom I've always received was: suppliers compete on price for business. But this just doesn't add up, to me.
I think the other answers miss price discrimination. Firms can make more money if they can sell goods for less to people who care about price and who therefore are willing to shop early, and those who do not care about price, or who are disorganized, and who are willing to pay more right before Christmas. If you put this on a supply and demand curve, it allows the stores to effectively create two supply curves to capture different parts of the demand curve. It's the same logic by which sales and coupons work in general.
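The price-discrimination point above can be made concrete with a stylized example (all numbers, segment sizes, and function names invented for illustration): two buyer segments with different willingness to pay, where a timed sale captures more revenue than any single price could.

```python
# Stylized two-segment price-discrimination example. Numbers are invented;
# only the logic (a sale price for deal-seekers, full price for everyone
# else) comes from the discussion above.

def revenue_single_price(price, segments):
    """segments: list of (num_buyers, willingness_to_pay).
    Buyers purchase only if the price is at or below their willingness to pay."""
    return sum(n * price for n, wtp in segments if wtp >= price)

def revenue_two_prices(sale_price, full_price, segments):
    """Price-insensitive buyers pay full price; deal-seekers buy only on sale."""
    total = 0
    for n, wtp in segments:
        if wtp >= full_price:
            total += n * full_price   # last-minute shoppers pay full freight
        elif wtp >= sale_price:
            total += n * sale_price   # bargain hunters buy only at the discount
    return total

# 100 price-sensitive buyers (will pay up to $40), 100 price-insensitive ($100):
segments = [(100, 40), (100, 100)]
print(revenue_single_price(100, segments))    # high price only: half the market walks
print(revenue_single_price(40, segments))     # low price only: money left on the table
print(revenue_two_prices(40, 100, segments))  # sale + full price beats either alone
```

The two-price revenue exceeds the best single price, which is the sense in which a sale lets the store "capture different parts of the demand curve."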
Supply and demand explains how prices are set. It doesn't explain how demand or supply are generated. The reason prices don't rise on Black Friday is that while demand spikes, it spikes predictably, so businesses increase supply. Since there is an increase in supply and demand simultaneously, the price does not change. Unless you're referring to why there are sales, which is a different behavior, more related to competition and returns to scale.
No? The point of Black Friday is that it's the first day of the Christmas season so a lot of people go shopping. A lot of stores offer discounts to try and attract this business. But not all do so it's certainly not the whole point.
>wouldn't concentrating the costs of logistics/production into a single month make less money than smooth, continuous operations over the course of the year?<
I recently found out our store has to order its Halloween items in February. Smooth and continuous is a pipe dream.
I think if you wanted to properly model firm behaviour, you'd have to incorporate some Game Theory. However, rather than think about Black Friday as an outward shift in demand, think about it more as a temporary increase in the price elasticity of demand by consumers. Consumers aren't just looking to buy, they're specifically looking to buy great deals on Black Friday. They're also looking to buy for the Holiday season, so the demand of consumers will contract once their holiday shopping is over. Firms make pricing decisions based on both current demand and expected future demand, so even if demand shifts to the right on BF it's not clear that prices increase as a result. A lot of firms that offer discounts over BF don't compete in perfectly competitive markets.
There's a two-way relationship between capitalist production realities and consumerist group-think:
1. Invent a shopping holiday on the basis of available statistical information and market the bejeezus out of it on the back (i.e. the front end) of the Christmas advertising push.
2. The idea is even more successful than ever imagined, becoming enshrined in public consciousness as an unofficial national holiday, *specifically* for the overburdened working class who likely won't have much time for Christmas shopping over the next month and whose mass media overexposure means they're more susceptible to broadly slathered marketing dollars spinning up FOMO anxiety.
3. Face the new reality: fake holiday has concentrated quarterly consumer purchasing into one catastrophic annual sales event. Now, even if it would be more cost-effective to spread out your operation, you've conditioned customers to 'wait for the sales'. Buckle up.
4. Do what you can to maximize margins inside the new status quo. Prices don't drop as much as advertisers would like you to think, and when they do it's a way to dump inventory before next year's re-up.
Retail businesses compete on profits and customers *over* competing on price. A low price is simply one of several ways to increase those first two metrics.
"If Black Friday is driven by a spike in demand, then I'd expect prices to grow rather than shrink."
Retailers and manufacturers know Black Friday will happen, so they plan for supply to increase to meet the demand in advance.
One study found that only 2% of Black Friday deals were not available at the same price or cheaper within six months either side of the date. That's just one study, but it would make sense given TANSTAAFL.
I thought that a lot of stuff would show cheaper on ebay as people realized they'd impulsively bought things they didn't want, but apparently that doesn't happen.
In the microeconomic sense, 'supply' is 'the amount of a good a seller is willing to sell *at a given price*' and 'demand' is 'the amount of a good a buyer is willing to purchase *at a given price*'. Black Friday, like all limited-time promotions, exists because there are buyers willing to buy most of what they want, when they want, at the 'normal' price, and other buyers who are only willing to buy at the offer price. By having time- (and often stock-)limited promotions, retailers reap the available profit from both, at the comparatively low cost of making a few 'coincidental' sales at the low price to people who would have bought high anyway.
Price discrimination would be my best explanation. Why do groceries have discounts on Tuesday or some other inconvenient day? Because that way they get to sell at slightly higher prices the rest of the week to price-insensitive customers, and still get to sell to the price-sensitive ones (who are willing to make an effort/deal with inconvenience in order to get a rebate). Same logic drives coupons, etc.
The Black Friday marketing ploy is "come stand in line starting at 5 am and you might get a cheaper TV than normal (limit 1 per family, while supplies last)". It's a great way to get some extra sales from price-sensitive customers without the hit to overall revenue that would come from just having lower prices in a normal way.
Gets into detail about Joan of Arc's trial being by secular authorities and lacking many guardrails that the Catholic Church required for heresy trials.
On the one hand, the Catholic Church wouldn't have had her killed, and I'm not sure it would have put her on trial for heresy at all. On the other hand, it's the Church that made heresy trials a serious matter, so I think it deserves some of the blame, though rather indirectly.
A spectacular essay about Joan of Arc, patron saint of Catholics who don't fit well in the Catholic Church, at least on the left side. It actually gave me a feeling of what it's like to want a patron saint.
Reading those posts followed by the Wikipedia article on the Siege of Orleans was rather jarring. The tigerbeatdown post made it sound like Joan was a skilled military leader while the Wikipedia article repeatedly lists Joan urging foolish military attacks only to be overruled by the people who knew better, and her only actual contribution to lifting the siege was a giant morale boost.
Hmm, I listened to the four-part series about Joan of Arc on the History on Fire podcast. The story Daniele tells is not the same as the above article. (Which sounds a bit... curmudgeonly.) If you can get past his thick Italian accent, I found it worth listening to.
The first one, I didn't read the second. It's been a while since I listened to the podcast. I guess most of the facts are not that much in doubt, what is not known is the motivation of the people involved. OK the second sounds closer to the History on Fire podcast. There's been a ton written about Joan of Arc, and finding the truth amongst all those words is perhaps impossible, so people kinda make up the truth they want.
Seemingly there's now "Joan of Arc was trans" out there, but I don't know how much traction it has or if it's just a publicity stunt like a provincial English museum declaring Heliogabalus was trans:
"Elagabalus was trans" has been a thing for some time, and it's a fair conclusion if we take statements about Elagabalus by the Roman historian Cassius Dio at face value. Specifically, Cassius Dio describes Elagabalus as insisting on being addressed as "lady", as referring to him/herself as the mistress, wife, or queen of a male court favorite named Hierocles, and as trying to solicit surgeons to give him/her female genitalia.
Cassius Dio was a contemporary of Elagabalus, and was a high-level politician so he had access to quite a bit of good info about Elagabalus, but he was out of favor and mostly well away from the capital during Elagabalus's reign so he was relying mostly on second- and third-hand accounts rather than personal observation. Cassius Dio was also aligned with Elagabalus's political opponents and was restored to favor and high office after Elagabalus was assassinated and succeeded by Severus Alexander.
In light of this context, it's also defensible to conclude that Cassius Dio's characterization of Elagabalus was malicious gossip at best and consciously-perpetrated political libel at worst. Accusations of unmanliness were a common genre of Roman political insults, and Elagabalus was an easy target for such even if they were groundless for reasons of personal appearance (he was young, slight of build, and looks rather effeminate in contemporary depictions) and ethnicity (he was Syrian rather than Italian or Greek, and Syrians apparently were stereotyped by Romans as being effete and effeminate).
This is a persistent problem with pre-modern historiography: an awful lot of important stuff is sparsely documented, so we often have to rely on our choice of embellished narratives written a century or two after the fact (and filling in gaps in their own sources with supposition and guesswork) or one or two contemporaries who seem to be lying liars who lie through their lie-holes.
For example, by far our best contemporary source for the major political and military events of Justinian the Great's reign is General Belisarius's lawyer, Procopius, who was the sort of lawyer who would make Saul Goodman look honest. Procopius was far too well-placed and wrote far too much to be entirely disregarded, and where we can cross-check him he seems to be pretty reliable about the details of stuff like the movements of armies and the progress of public works projects, but we're pretty sure he was lying about how Justinian's body took demonic form at night and his head would fade in and out of existence, and that leads us to wonder how much we can trust him when he talks about the sexual escapades of Theodora and Antonina (the wives of Justinian and Belisarius, respectively).
This is a really good point. There's something ironic about old slurs against the masculinity of political opponents being used to elevate those people 2000 years later as LGBT representation.
That said Hadrian actually was gay, and did a reasonably good job. Caesar was apparently bi, and his name now means 'emperor'. So there are actual role models. :)
Joan of Arc was an extreme tomboy. Redefining "tomboy" as "trans" is not good, and conspicuously opposed to that thing I *thought* we were doing where actual girls were allowed to wear pants, code, play sports, and do all the other traditional guy things if they wanted.
Usurping command of the armies of France is, of course, generally frowned upon regardless of gender. But we'll make an exception if God himself commands it.
I mean, in that era, you were a woman doing woman things or you were a man doing man things. Joan of Arc wanted to do man things, so she dressed up as one. If you transported her to this era, would she be a transman or an aspiring bossgirl? I don't know how you would begin to answer that. The further back you go the less sense our categories make, and we're going back nearly 600 years here.
Thinking about the fall-injury incentives thing from https://slatestarcodex.com/2016/11/10/book-review-house-of-god/ is it possible that somehow adjusting the basis on which medicaid and other government programs pay for dialysis would motivate existing medical providers to throw their weight behind reforms?
They do. There are models that take into account saturation effects (after a while, there is no increased buying for extra ad views), memory/decay of the ad effect, synergy with in-store promotions, seasonality, etc.
Then you can run your favourite optimisation process to maximise future ROI, based on weekly spend patterns.
(It's part of my day job to build models like these.)
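A minimal sketch of the kind of model described above, with entirely hypothetical parameter values: geometric "adstock" carryover for the memory/decay of the ad effect, plus a saturating response curve for diminishing returns on extra ad views.

```python
def adstock(spend, decay=0.5):
    """Geometric carryover: each week's effective exposure is that week's
    spend plus a decayed memory of earlier weeks' spend."""
    effect, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        effect.append(carry)
    return effect

def saturation(exposure, half_sat=100.0):
    """Diminishing returns: lift approaches 1.0, but extra exposure buys
    less and less once past the half-saturation point."""
    return exposure / (exposure + half_sat)

# A burst of spend in week 2 keeps paying off (less and less) in weeks 3-4.
weekly_spend = [0, 200, 0, 0]
lift = [saturation(e) for e in adstock(weekly_spend)]
```

Real marketing-mix models layer seasonality, promotions, and cross-channel synergy on top of curves like these, then optimise the weekly spend pattern against them.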
Seems like that's a fair strategy for promoting a brand, and you do see things like that. Coca Cola doesn't need to advertise every day but they'll still do the occasional big campaign and product placement drives to make sure they don't fade from the public consciousness.
The term "oratrice mécanique d'analyse cardinale" has been trending in the meme-verse lately. It's the name of a device in the game Genshin Impact. I'm trying to figure out whether the name makes any sense in proper French.
A straightforward translation into English gets me "mechanical speaker of cardinal analysis," which doesn't make much sense, particularly that "cardinal" bit. But maybe there is more going on here than my high school French skills can manage.
Not a native but I think I speak enough French and Chinese to explain this. It means: "Mechanical Speaker of Cardinal Analysis" though due to French gender rules the speaker must be a woman or otherwise grammatically feminine.
This is an attempt to translate 谕示裁定枢机 which means something like 'Oracle Adjudicator Machine'. However, if you translate it very literally it means 'Tell Instruct Decide Certainty Door Machine'. Tell-instruct became oratrice (speaker) because it roughly means oracle. Machine became mechanical. Decide/Certainty became analysis. And then they translated 枢机 as cardinal because, for whatever reason, the word for Cardinal in Chinese is literally 'door machine'.
So basically a bad translator. I'd translate it as "machine de jugement oraculaire." Or maybe more poetically "les balances oraculaires."
One of the benefits of being multilingual is you learn how bad many translations are. A lot of it is petty differences too. I once remember a translation that translated "icy lake" as "very cold lake." And my thought was: why not just translate it literally? Obviously an English speaker understands that 'icy' implies 'cold' just as in the original. But no, 'very cold.'
A DDG search returned this reddit post [0], which claims the virality is just frenchies being proud of the in-game pronunciation, and also because the cadence is just poetically pleasing to the ear.
DDG also returned this article [1], which says the object is a conscious, mechanical weighing-scale which issues legal judgements. "Cardinal" is probably just a fancy way of saying "math".
>A straightforward translation into English gets me "mechanical speaker of cardinal analysis,"
That's correct, with a detail and a caveat:
-"Oratrice" translates to "speaker" or "orator", but of feminine genre.
-"Cardinale" can refer to a vast number of things depending on the field it's used in, and sometimes to multiple things in a single field. Considering the complete sentence looks really, really like Japanese using gibberish European to look cool, I wouldn't expect them to have had any specific meaning in mind.
Sounds like they just strung together a bunch of unlikely things. "Oratrice" is feminine, so it would be a female mechanical speaker, I guess like Siri or Alexa. According to Google, cardinal analysis seems to be an obscure theory in economics.
Spinning with a drop spindle can be learned fairly quickly, but getting good at it takes time. But everyone who sees you doing it will think you're some sort of witch/wizard, which is pretty neat.
Getting good enough to impress people would probably take less time than getting your pilot's license.
Making things sort of fits. You only need to make one impressive thing and then just keep it around and new strangers will continue to be impressed by it, unlike other skills that might require continuous polishing.
People are impressed by the amount of poetry I know although I used to be able to learn a new poem very easily. That is no longer the case, the first evidence I noticed of memory decline with age.
Perhaps being able to tell for an arbitrary date on which weekday this was/will be?
With a bit of math affinity you can probably learn it in a day, and I guess a conversation "Oh, it's your birthday? How old are you? Oh, then you were born on a Tuesday." is pretty impressive to common folks.
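The party trick is usually done with a memorized rule like Conway's Doomsday method; the same computation can be written out mechanically as Zeller's congruence, which is a reasonable way to check yourself while learning:

```python
def weekday(year, month, day):
    """Zeller's congruence for the Gregorian calendar."""
    if month < 3:        # treat January/February as months 13/14 of the prior year
        month += 12
        year -= 1
    k, j = year % 100, year // 100  # year within century, century
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

weekday(2000, 1, 1)  # -> "Saturday"
```

The mental-math versions work by memorizing per-month and per-century offsets so the division steps collapse into a few small additions mod 7.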
I used to be able to astonish people by getting a salivary gland under my tongue to squirt saliva 2 or 3 feet. Somebody taught me to do it when I was a teen, and I just followed their directions and they worked. But I've never been able to teach anyone else how to do it. Everybody gets frustrated, and then some start just plain old spitting at their target. Sort of like in Harry Potter when some people trying to apparate for the first time pirouette and then deliberately leap out of their hoops.
I've been doing gymnastics and circus stuff for some years, although only on an amateur level. It is striking how much this concept shows up. Learning a backflip is actually surprisingly easy (for a relatively fit individual), especially into water. On the other hand learning to do a handstand requires a large and enduring effort. The backflip still induces more awe in people I would say. Among my friends we often joke that the backflip has the highest impressiveness-to-time-spent ratio. For the handstand it is also striking how one month spent on technique vs 2 years 'looks the same' to the uninitiated.
I think watching amateur circus stuff can give you some tips here. Although impressiveness is not one to one with entertainingness, a lot of the stuff they do on the scene do not require a lot of skill or practice, but often is simply daring or shameless.
Also Mike Boyd on youtube is a good source, his channel is only him learning stuff and recording how long it takes, and then you can decide for yourself how impressive things are.
I did gymnastics like 8 years ago and I could do both a backflip as well as a handstand. I've lost the ability to backflip but, funnily enough, I can still walk on my hands fairly well.
You can look up your nearest flying club and take a 60 minute discovery flight with an instructor. "I flew a plane this weekend" gets you a fair bit of undeserved admiration. The trick is to not actually pursue the license because flying will eat up all your money.
Even getting a private pilot's license is more impressive than it is difficult. I seem to remember hearing that people manage to do it in two or three weeks of full-time lessons and study.
There are intensive courses that can get you the license pretty quickly. It's more expensive than it is difficult, I'd say, though you do need to go through a fair bit of theory as well as the actual flying.
Yes, though that's mostly due to their taking drugs that aren't on the FAA's approved list. Which kind of has to be different than the FDA's, because the range of allowable side effects is different, but the FAA doesn't have the resources to investigate the entire modern pharmacopoeia. Psychiatric drugs are particularly problematic, for obvious reasons, and the FAA only recently put a few SSRIs on the approved list.
Of perhaps particular interest here, if you take the drug that requires you to lie to your doctor and say "I randomly fall asleep in the middle of the day", or the other drug that requires you to lie to your doctor and say "I often can't focus on important things that really need my attention", the FAA will quite understandably block you from acting as a pilot.
The Air Force has its own rules, different from the FAA or FDA, and they have their own doctors to guide them. And octogenarians are not categorically disallowed the way e.g. self-proclaimed narcoleptics are, because octogenarianism per se is not an impediment to safely flying an airplane. It is associated with a high risk of dangerous medical conditions, but that's what the regular medical exams are for - and I believe most pilots are screened out by the time they are 80.
This week I've been playing Slay The Princess, a visual novel that has been going mildly viral and getting outstanding reviews. The premise made it seem like it would be a recreation of AI box experiment: the princess is locked up in a cabin, your job is to slay her, and she will manipulate, threaten or seduce you to stay alive.
(mild spoilers below)
Well it turns out it was less of that and more of a Stanley Parable crossed with Disco Elysium (which I should get to playing sometime soon). The game is essentially a series of vignettes, some touching and some amusing, connected by branching paths. The full playthrough basically requires you to backtrack and re-make your choices, so you can't really play a role of a prudent gatekeeper. Or, well, you can, but it leads to a joke ending and credits roll.
Those who played it, would you like to share your favourite route? (and why is it Razor)
I like when philosophy games have a Message, a Point perhaps, something that they actually want the player to understand or think about, while Slay the Princess seems to be devoid of that, and just throws options at you. It's still nice, a lot of dialogue is amusing, and it's certainly creative, but it failed to make an impression on me as well
Over the past couple of weeks I've been seeing an increasing number of articles on the subject of battery systems for renewable energy becoming price competitive with gas-fired plants (example link below). Given that intermittency of renewable energy has been THE sticking point in regards to the energy transition, that seems like pretty big news. Can anyone with more experience or knowledge in the subject offer some insight as to what degree this is hype or if we're on the cusp of a genuine shift in the economics of power generation?
For personal home use, solar + batteries has been economically viable for a while (as in, it will reliably pay off in under 10 years, and often under 5 years depending on specifics / tax credits). This typically uses lithium-ion or lithium iron phosphate batteries from China.
But for grid use, we're very far from having a viable solution to the duck curve problem caused by solar and wind, and lithium batteries are nowhere near the price for scale that we need.
Li-ion batteries are suboptimal for grid storage except inasmuch as old ones can be reused near their end of life. Until something like Vanadium flow batteries are available at scale it’s hard to see how battery storage displaces most gas plants (at least in the US).
I'd like to find a sperm donor whose sperm increases the chance of various desirable traits: health, IQ, talents, looks, etc. Additionally, its extremely important to me to minimize the chances of mental health issues - because the egg will likely bring some. Finally, I'd like to increase the chances for things such as values being close to mine and overall usefulness/success in life.
1) The best approach to all of the above seems to be to know someone's wide family. If there's no history of mental health issues X generations down and across many people, that sounds like reasonable probability. The same for other traits. (Yes, I'm thinking a bit in the vein of https://www.astralcodexten.com/p/secrets-of-the-great-families ).
2) Do any official institutions (spermbanks) offer anything similar? If yes, would you have tips? If not, why not? Are there regulatory issues or is there just such low demand / high stigma?
3) Can you think of a better way to find donors than just get tips on Wikipedia, on these forums and through chain emails sent to competent friends who know competent friends and then doing deep background checks on their families?
I have donated sperm (to LGBT couples, by refrigerated shipping in a transport buffer), and I have a full genome sequence available. I would consider myself to be intelligent (>99th percentile by standardized testing) and talented. If you are interested please email me at [my username]@protonmail.com
Also found a couple sperm banks that tell you whether the donor has a degree beyond a BA or BS, and one that does genetic testing of donors, though not for genes thought to be related to intelligence.
In the past there were some sperm banks in California where you got some information about the donor -- I believe it was height, hair color, highest degree attained plus a statement the donor wrote. The most well-known one was California Cryo. They were willing to ship sperm in some kind of special tank that kept it cold. Don't know whether it still exists, or whether there is now any place where you get more information.
I know someone who placed an ad for a sperm donor on the campus of a high-prestige university that happened to be near her, and about half a dozen students answered her ad. The ones she selected underwent screening for STDs, which the woman paid for. She did not tell the guys her name, and they were comfortable with that arrangement. I'm not sure what she paid them, but it wasn't a lot -- I think something like $100 per donation. (She did not have intercourse with them -- she gave them sterile containers to put the sperm in.) As far as I know, there was nothing illegal about any of this. I think there's a reasonable chance that most guys would answer honestly if you ask them about having suffered from serious mental illness. (I'd say things like anxiety and low-grade OCD and some depression after a relationship breakup really do not count. You want to know whether they have had a bipolar episode or been psychotic. Of course, undergrads and most grad students have not yet passed through the age of maximum risk for having a first episode.)
Also heard a rumor there was a "genius bank," selling the sperm of men who'd achieved at a high level. Don't know if it's true.
I think there are probably a number of sane, pleasant, smart men who would be willing to simply donate some sperm to you -- in the same spirit as Scott donated a kidney -- just to help out somebody in need with something they can spare. I'd try asking on here, actually, next time there's a classifieds thread. Wherever you ask, I recommend you offer to sign either a formal or informal document totally letting the man off the hook for any responsibility for the child. You might also want the man to agree never to contact the child and introduce himself, unless you'd be OK with that.
Oh, one other thing: I wouldn't bother with deep background checks. I'd say it would be enough to ask the man whether he or any of his first-degree relatives (parents and siblings) have any of the common conditions that are both heritable and really bad news to get: bipolar illness, schizophrenia . . . Look up what the most heritable serious mental and non-mental diseases are. I don't think looking for things like crimes and bankruptcies in the family history will get you much. For positive traits, you can ask about talents, life achievement and highest degree attained. I expect most people would give honest answers about these matters. It's not like they're going to get rich with this "job" you're offering.
"Sperm banks handle this through donor anonymity" Even if anonymity had not been challenged in court, I'm skeptical that it would be a long term solution. DNA sequencing has gotten remarkably cheap so I doubt that sperm donor anonymity can be permanently protected.
You may be right. I'm just guessing from the gradual general increase in surveillance that everyone's DNA will probably wind up in some database eventually.
What if the donor stays anonymous? Would that work? There's no reason for OP ever to learn his name or address or see his face, even if they have extensive conversations prior to the man donating. And once that's all settled, a friend of his can deliver the container of sperm to OP.
Why can't the donor and the recipient get around that prior to the donation by signing a child support agreement for a penny per year? Donor then deposits 18 pennies with the recipient.
My understanding is that courts consider child support to be a right of the child, not of the parent. Consequently, any such contract is worthless because the child did not agree to have its child support payments reduced by 99%. There's nothing stopping a parent from just not complaining about not receiving money, but if they change their mind after seeing how hard parenting is then no piece of paper is going to stop them.
I do not agree that the most intelligent and talented people are neurodivergent. Just hunted for info online. In general, people on the autism spectrum do *less* well than normals on IQ tests. When researchers limited their investigation to math ability in high-functioning people on the spectrum, i.e. kids diagnosed with Asperger's, some researchers find these kids are better at math than average, some found them not to be. While Aspies do not seem to be a lot better at math on average, it could work in the other direction, with a disproportionate number of mathematicians, chess wizards, etc. being on the spectrum. It may be true, it may be an urban myth. But in any case, there are a lot of fields in which one can be a genius: Biology, music composition, philosophy, writing . . . I've never heard people suggesting that genius in these other fields is associated with neurodivergence, and that has not been my observation in real life. Many people who had very high achievement in these other areas seem to have been sociable, flexible, and to lack that "system-builder" quality that's characteristic of people on the spectrum.
I have the following model: in order to be noticed for your genius (because you won a competition or are a renowned professor or whatever), you usually need to be at least moderately world-savvy / neurotypical. However, if you are absurdly intelligent / whatever, you may be able to coast on that and be noticed as a genius even without those other traits. Then a neurotypical person at the 90th percentile may be as renowned as a neuroatypical person at the 95th percentile (I'm making up the numbers). If this model holds, the neuroatypical people we laud as geniuses would be genuinely more intelligent / whatever than their fellow neurotypical geniuses, but that wouldn't mean being neuroatypical is an advantage.
Sticking out from the average in any way can make life hard. We are social animals, other kids sense something unusual about you, and you become the weird one. Plus, if you're unusually skilled at thinking, it's easy to lag behind the norm in other areas like emotional maturity and personal discipline, well into adulthood, which sucks pretty bad. Not speaking from experience or anything...
Look, it's a lot more complicated than Wikipedia thinks. Here are some of the complications:
(1)If you say someone is creative you can mean they're an artist of some kind, or you can mean they literally have high ideational fluency. The owner of a chain of drugstores can be quite creative with how he sets them up, or staffs them, or advertises them. If people in the arts have a higher rate of mental illness, you need to take into account the fact that it is extremely hard to make a living in the arts. It's a hard row to hoe. These people lead difficult lives. I have never seen a shred of evidence that people who simply have high ideational fluency -- people who can think of a lot of uses for a brick in 5 mins, who can come up with a clever, novel way to solve a puzzle -- have higher rates of mental illness. In fact I'd guess that in general they are more successful than their peers in non-arts professions, and lead easier, more gratifying lives. Being quick to think of original ideas is an advantage.
(2) In general, mental health and intelligence are positively correlated. But I'm not saying there's nothing in what you say. Here's what I think is a sophisticated, fair-minded summary:
"The persistent mad-genius controversy concerns whether creativity and psychopathology are positively or negatively correlated. Remarkably, the answer can be “both”! The debate has unfortunately overlooked the fact that the creativity-psychopathology correlation can be expressed as two independent propositions: (a) Among all creative individuals, the most creative are at higher risk for mental illness than are the less creative and (b) among all people, creative individuals exhibit better mental health than do noncreative individuals. In both propositions, creativity is defined by the production of one or more creative products that contribute to an established domain of achievement. Yet when the typical cross-sectional distribution of creative productivity is taken into account, these two statements can both be true. This potential compatibility is here christened the mad-genius paradox. This paradox can follow logically from the assumption that the distribution of creative productivity is approximated by an inverse power function called Lotka’s law. Even if psychopathology is specified to correlate positively with creative productivity, creators as a whole can still display appreciably less psychopathology than do people in the general population because the creative geniuses who are most at risk represent an extremely tiny proportion of those contributing to the domain. The hypothesized paradox has important scientific ramifications." From The Mad-Genius Paradox: Can Creative People Be More Mentally Healthy But Highly Creative People More Mentally Ill?
"The world is not that complex, reductionism works, intelligence is basically what matters, world optimization should be tried, all it takes is high agency people with the right values.
OR
The world is very complex, marginalism is what works, intelligence alone isn’t worth much, tacit knowledge and experience and tradition are valuable, smart people thinking they can optimize the world is hubris and inevitably leads to failure or worse."
Which do you think has more truth value? I think I'd go with 10/90 former/latter. A good response I saw says: "first one locally, second one globally".
"The world is complex, reductionism works, intelligence is basically what matters but intelligence alone is necessary but insufficient, world optimization should be tried, all it takes is high agency people with the right values but tacit knowledge and experience and tradition are still valuable."
I've seen broad questions, but this one is on a level of its own. In a couple sentences you've invoked philosophy of science, epistemology, complexity theory, economics, anthropology, history, ethics, and probably more.
I think the main term you're looking for is "emergent complexity". My quick answer is the good old middle way: both angles are important, and getting the right balance between them for the problem at hand is even more important.
When your low-level theory is good enough, reductionism works, but in a kind of hollow way. There is nothing about what a car does that could theoretically not be simulated at the level of fundamental particles and force fields. But there is a lot of information in a car that can only be understood at much higher levels of explanation. The fact that it's designed to fit humans, themselves produced by evolution and steeped in cultures. The need to not only successfully carry them places, but subtly make them feel powerful and safe. The assumption of a steady supply of refined hydrocarbons to burn, and paved roads to run on. The pressure for more fuel efficiency and lower emissions. The cultural and environmental constraints that make Americans buy huge cars and Japanese tiny ones. The memetic trends that produce preferences in color and shape, and so on and so forth. No amount of looking at the fundamental equations of physics could give you a hint that these things would appear.
The larger the ambition, the more you need to have a good level of knowledge of many of these levels. Not just theoretical knowledge, but the kind that internalizes as gut feelings, which means you're also enlisting the help of the huge part of your cognition whose inner workings are not visible to consciousness. But the world is complex, and there is a strong trend towards specialization, so any of us will probably end up seeing the whole picture through the partial angle of whatever layers we're most familiar with.
Big advances come from the rare ability to reach up and down simultaneously. Turning sand into CPUs requires going down to the quantum level, but being able to sell those CPUs requires marketing, which is basically applied mass psychology. At the highest level is the emergent behavior of large groups of humans. Human nature at scale is what made the green revolution a success, and communism a failure. Sometimes we go for grand goals, and hate the results. At every level there is uncertainty; you literally don't know until it's been tried.
Sam Kriss's article on René Girard makes the solid point that wide-ranging theorization has fallen out of fashion in the last century. "A century ago, intellectual life was dominated by brilliant, charismatic, but slightly daft theorists, people with intense tunnel vision, such as J. G. Frazer and Rudolf Steiner and Sigmund Freud. Today there are almost none of these thinkers, and the world feels poorer for it. Wouldn’t it be more interesting if we had hundreds of René Girards, each working away on their own vast theory of everything, interpreting all of history through one idiosyncratic insight?"
"The novice knows how things can go right, the expert knows how things can go wrong". I.e. the mechanics may be simple in hindsight, but the state-space and mechanism-space is often larger than you think. E.g. everyone thinks they understand Newton's 3 Laws of Motion until they see the Tennis Racket Effect. It's simple, reductionist, and completely bewildering.
I say: causal-reasoning for the well-understood, effectual-reasoning for the frontier.
P.S. What does "marginalism" mean in this context? Scientific iteration? Supply and demand? Something else?
2) Reductionism works in the sense that it has been extremely successful as a research paradigm. That's not the same thing as believing that all phenomena can be explained solely by lower-level processes.
3) Intelligence is what matters. There's a reason we rule the world and chimps don't.
4) This one's a bit complicated. On the one hand, I do think there's a decent argument to be made that things like "tradition" and tacit knowledge represent a distributed information processing system that allows solutions to local problems to be developed without requiring any one person to fully understand why they work. The problem it runs into is that it's essentially a Darwinian process, slowly building a homeostatic system in response to signals from the local environment. And as is well known, evolution does not and cannot plan for the future. It can only respond to conditions as they are, and when those conditions change too rapidly the result is usually organism death. So while we shouldn't be too quick to dismiss the potential knowledge to be found in tradition, we need to recognize that the conditions which gave rise to it and which gave it its adaptive function may no longer exist. "Trust, but verify".
As for whether smart people thinking they can optimize the world is hubris, of course it is. That doesn't mean we shouldn't attempt to do so. Sometimes, we really do make the world better.
Depends on the cultures we're comparing. As I said, I tend to think of tradition as a locally-optimized solution for a problem that a society has come to over time. So whether different cultures overlap in terms of their traditions would depend on the degree to which they've been exposed to similar problems and have independently arrived at similar solutions to it. I have no idea what the answer to that is or even if anyone has looked at that.
:- There are ideas that work that could be plausibly described as "reductionist" and/or "marginalist", but both terms are usually too vague to be helpful.
:- In most fields intelligence is valuable but soon hits diminishing returns and is far from sufficient, while knowledge and experience are vital and have much higher ceilings. In a few areas (e.g. pure maths) intelligence is much more important, but even there experience is easy to underrate.
:- Smart people thinking that they can optimise the world is hubris. To see this, look at all the things that humans disagree about, and observe that for most of those the correlation between which side you're on and measures of intelligence is very weak, and it's often pretty much zero. Conclude that for a lot of questions intelligence is clearly not sufficient for working out the correct answer.
Hmm, should I snark about the question before or after answering the question? I think... before. Ahem: As worded, 1 is the only viable answer; trying to answer 2 renders the dichotomy too reductionist to be able to answer 2. Seriously, how are you gonna go all-in on marginalism?
I'd say 50/50. The world's complexity depends on what you're trying to do with it. Experimentation is what works. Knowledge, experience and tradition are valuable in saving time on your experimentation, but cannot replace it. Trying to optimize often leads to failure, but failure can then lead to success.
I'd go with 40/60. The world is very complex, incentives are what works, intelligence matters a great deal, tacit knowledge and experience and tradition are somewhat valuable, smart people thinking they can optimize the world sometimes works and sometimes doesn't.
To give an example, look at medicine. We think of bloodletting as medieval, but it was common into the 19th century. This was one reason why homeopathy became popular: it was doing nothing while "real" medicine was hurting people. Smart people applying a simple idea, "medical interventions need to be tested against a placebo", beat the force of medical tradition, which had been building up a massive body count for thousands of years. And it's not like people in the 12th century couldn't have done it our way because they lacked the tools to make the tools to make the tools. Comparison studies could have been conducted then, and though they didn't have all our statistical tools, "eyeball the chart" would have been better than what they were doing.
Just recently I heard a perfectly regular doctor give a plausible reason for bloodletting: too much iron in the blood is quite common, and bad for you. If the body cannot regulate it down, getting rid of some blood is the easiest fix. He suggested donating blood as the modern alternative. I haven't fact-checked any further, just repeating something I heard from a specialist who didn't look like he had axes to grind.
The latter, all the way. The former is just Top. Men. And even disregarding the not-great track record that has in the real world, it creates *huge* incentives to fake being one of those Top Men. And like all forms of autocracy (and make no mistake, that kind of world-optimization plan relies on those at the top having the power to compel others to follow them, which is just as autocratic as a regular dictatorship), it quickly degenerates.
And there is not a single person I'd trust with that kind of power.
That's not to say that intelligence isn't valuable. It's morally flawed to do things you know are stupid; it's also morally flawed to give extra power to people prone to doing stupid things. But the diminishing returns to actually solving real problems or governing real systems are real, and kick in somewhere just above "normal". The world (or even any meaningful piece of it) is too large and too interconnected to be held in any one person's brain. Or even any finite set of people's brains. Much of it appears irrational, mainly because we can't see all the factors.
> it creates *huge* incentives to fake being one of those Top Men.
I thought the question was about how things actually work, not about what beliefs are more socially desirable. Isn't it one of the basic tenets of rationality to keep a clear distinction between these? There's already too much of "X can't be true because it would be bad if people believed it" out there in the world...
First, I find the phrase "tenets of rationality" to be slightly...revealing. Religions have tenets.
The relevant part of the quote for that section was "world optimization should be tried, all it takes is high agency people with the right values." And that's a should-statement, not an is-statement. Overall the initial "dichotomy" (which I agree with others is more complex than that) was a mix of statements about present reality and statements about how we should structure society.
I’d put myself on the map as “The world is “””not that complex”””, reductionism works, political will is basically what matters, world optimization should be tried, all it takes is high agency people with the right values.” There’s no shortage of intelligence out there, there’s a shortage of cohesion and consensus, which basically are created by leaders who build them up.
I don't think so. My impression is that Burkean pro-tradition/experience conservatism (the second part of the dichotomy in a nutshell) is way more common on ACX than in the broader public or almost anywhere else. The median American is more "trust the experts or you're a dumb uneducated hick".
The median American trusts the subset of experts telling them what they think they already know and/or want to hear. This almost by definition results in marginalism, and the rest of the second package. Hnau has the gist of it.
I interpreted his post as saying that the American public should believe the first proposition more than it currently does (i.e., should trust the experts more) and ACX posters should believe the second proposition more than they do (i.e., the value of tradition and experience). The aggregate political result (slower change due to a roughly 50/50 split between the political tribes) is beside the point.
I know that I've massively missed the bus on this, but is anyone else annoyed by the historical inaccuracy of calling the "rationalist community" as such? In the history of Western philosophy, the divide between rationalism and empiricism is one of the main splits, and the modern "rationalist" movement clearly falls on the empiricist side. Empiricism was about basing your view of the world on sense data, which is what modern "rationalism" does with its focus on Bayesian updating as the core means of knowledge acquisition. Meanwhile, actual historical rationalism held that if there was a conflict between your preconceived internal ideas about the world and your sense-based observations, instead of updating your internal ideas, you held that it was your senses that were wrong. This is how you got stuff like the Eleatics (who were essentially proto-rationalists) holding that change didn't exist despite change being observable at every moment of existence, or Leibniz holding that this was the best of all possible worlds despite all the easily observable evil in it. As you can see, this is the complete opposite of the epistemic system advocated by modern "rationalism". If I were to come up with a more accurate label for this movement, which I know it's much too late for, I'd call it Bayesian Empiricism, or maybe Neo-Empiricism. Anyways, that's my rant, I know it likely won't change anything but I had to get it out there.
There are several different traditional meanings of "rationalism", depending on whether we talk about philosophical rationalism, theological rationalism, political rationalism, etc. Specifically, the political rationalism refers to things such as rational choice, utilitarianism, secularism, which is quite similar to the values of the "rationalist community".
(There are even more confusing words out there; for example, according to Wikipedia, there are over 50 mutually contradictory meanings of "realism".)
Empiricism and Bayesianism are not precise either; they are just parts of the whole thing.
Basically I agree. But have you not noticed Scott regularly discount entire scientific papers because of strong priors? When your (ideally) Bayesian-derived knowledge gets solid enough, parts of it start looking like good old (proper) Rationalism. See for example "The control group is out of control".
I'm not the least bit annoyed when a community I sort of respect tells Western philosophy that it's been getting things wrong for most of its history. Rationalism is a poor name for what Descartes et al were talking about, and a pretty good name for what Yudkowsky et al are talking about.
It's true that "rational" is sort of a weasel word that can mean all sorts of things. Though, I can't think of a better (i.e. more specific?) name for Rationalism qua Descartes than what was historically chosen.
I try to mitigate the annoyance by saying "empiricism is rational, therefore empiricists are rationalists" and similar, but it doesn't really work
luckily the whole empiricism vs rationalism debate has been obsolete for hundreds of years by now and only us history-of-philosophy nerds even notice the annoyance
Yes, I know others have been annoyed by this. I remember reading some major columnist (Ross Douthat maybe??) noting that internet rationalism has more in common with philosophical empiricism than philosophical rationalism.
Yes, I'm sorry, I was making a joke. Politicians often have to run for office and get votes from a lot of people who might not be that keen on some of the things you want to teach them.
I don't envy politicians much. It's really hard to communicate when everything you say will be reduced down to a four word sound bite and interpreted in the most hostile manner possible.
There's also the effect Paul Graham described where it is impossible to communicate more than 1 bit of information to a large audience.
As I said on the other comment, before you can do even this much, you'd need to explain that numbers actually refer to things external to people's opinions of them. For example, if I say there are 1.5 million cars in the world, and you say there are 1.5 billion, then it is not merely the case that our statements are substantially different -- it is also the case that we can actually go out and check. There actually exists the correct answer, regardless of who is on whose side in which political/religious/whatever debate.
Agreed! ( There is some unavoidable fuzz from corner cases. E.g. Does a car which has just been in a collision but has neither been assessed by a mechanic nor by an insurance adjuster to decide if it is a total loss count? But this is minor, and almost everything has error bars. )
Of course this is not an exact number, but rather an estimate based primarily on sales figures; thus, it has pretty large error bars attached to it (metaphorically speaking).
100% yes to this! In another thread many words have been spent discussing minutiae of subharmonics “generated” by CD players. A simple lab measurement would demonstrate their nonexistence. But why find out the objective truth when one can idly speculate? /mild sarcasm.
One way I’ve found to make those kinds of numbers seem more real is kind of a toy thought experiment, fun to think through if never come across it before.
Let’s say you meet a very wealthy and eccentric person who decides to give you a lot of money, which is great, but only $1 at a time (eccentric, natch)
Let’s say they hand over $1 bill every second, and they do this continuously, without a break for eating or sleeping or explaining how they happened to have such a large supply of dollar bills
How long will it take for them to give you $1 million? How much longer for $1 billion? $1 trillion?
Yup, or even just abstractly phrased as "about how long is a million/billion/trillion seconds?".
Now, the Fermi question that I cannot answer is: What fraction of the general population, and what fraction of our rulers, can answer this question (reasonably) correctly?
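For anyone who wants the punchline without the mental arithmetic, it's a three-line check:

```python
SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365  # ignoring leap years

for label, amount in [("million", 10**6), ("billion", 10**9), ("trillion", 10**12)]:
    print(f"$1 {label} at $1/second: {amount / SECONDS_PER_DAY:,.0f} days"
          f" = {amount / SECONDS_PER_YEAR:,.0f} years")
```

Roughly: a million takes under two weeks, a billion about 32 years, and a trillion around 32,000 years, which is the jump that usually breaks people's intuitions.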
I remember the "Future Strategist" podcasts by James Miller with Gregory Cochran, back in early 2020, where Cochran claimed in his seemingly-overconfident fashion that the UK government didn't know anything and were making stupid decisions based on what they wanted to be true rather than what was true. Miller gently pushed back, putting his trust in markets and governmental advisors, but Cochran was all, like, "nope!".
I once enjoyed a book called "Physics for Future Presidents" that was written in this spirit albeit about a subject that has lost cachet as something to worry about.
I'm not sure whether knowing about RAM is all that important.
Let's limit it to the 5 most important scientific concepts for politicians-- that's probably as much as can be covered in a one semester course, and remember that this is for people who aren't naturally good at such things.
Maybe there can be another 5-topic course for technology.
The article mentions exponential growth, and I think it could reasonably be expanded to getting an understanding of s-curves.
Evolution is another crucial concept.
Probably include that science is a process of figuring things out. Some parts are well-settled, while others are more likely to change.
Nah, none of these will work. The most important concept, which is a foundation for all the others you listed, is that a) there exists an external reality, b) it cannot be changed by mere words, and c) science is a technique of using numbers to describe this reality in a very precise way that simply cannot be achieved with words.
I know this might sound basic, but most people in general (and politicians especially) have not internalized these concepts. The boiling temperature of water at sea level is not just some social convention or a captivating story or a talking point or a popular turn of phrase; rather, it is the outcome of a sophisticated model that is tied into many other models, and the model works so well that we call it "true". Water really does exist, and it really boils at 100 deg C, and no amount of speechifying will change that. You could take all your thermometers and throw them away or relabel them, and water would still boil at the same temperature; you just wouldn't be able to measure it anymore.
This is a really powerful idea that sounds deceptively simple, but is actually very difficult to fully comprehend -- otherwise, we'd have no need for the scientific method.
The problem isn't that politicians don't understand the idea of a physical reality. Everyone understands that, no matter how thick (e.g. ask them whether they'd put their hand on a hot stove or go without eating for weeks or whatever).
The problem is that in some cases, social dynamics are more important than physics, and politicians are embedded in those spaces. The laws of physics don't win you votes, being popular does. But it's not limited to politics either. The real hubris of Rationalists is assuming that people don't matter.
I agree. But how difficult it is to understand depends a lot on the context: in your example, it's quite easy to accept that water boils at 100°C, unless you gain something from water boiling at a lower or higher temperature, or from it depending on the moon phase.
I would add a bb) The external reality must be assessed by direct or indirect measures, never by how well or badly it would affect you (or anybody). This, imho, is the harder thing, because of the natural tendency to use scientific descriptions of reality as ammunition for advancing your particular cause or interests. Everyone is pro-science, but when reality does not advance (or worse, weakens) your cause, not so much anymore... even scientists :-)
I agree that RAM itself is not important -- it's more a sort of marker. About the science course. Well, I think the 3rd one is way too general. Anyone with an intro-course grasp of any branch of science will have run into compelling evidence of it, and will get the point. I think I would vote for giving politicians well-taught courses in 5 things they are likely to be called upon to understand in the present day, rather than a general grasp of big concepts.
-how epidemics work
-how hatred of other groups works, and what promotes development of alliances & tolerance
-how the economy & financial systems work
-effects of tech on human lives -- good and bad
-what constitutes progress -- there are different views
Seems like you could do a decent job with each of these with a few articles of about the length and difficulty of articles in the Atlantic or New Yorker, each followed by a discussion led by instructor. Those who wanted to learn more could be given a list of good sources.
Edit: I get that these are not exactly science courses. But you could teach each course in a way that brings in a lot of science. Even the question of what constitutes progress could include lots of data using various measures of progress: percent of population below a certain standard of living, fraction of world population engaged in war, frequency of suicide, happiness polls.
How epidemics work seems like a very advanced subject, given that experts on the topic did quite a poor job at predictions.
More than politicians sucking at mid-level math (hardly surprising given the popularity of "I hated math in college" during any celebrity interview), it's the incredible confidence and ego boost people got during early covid from mastering exponentials and ODEs with 2 variables. Scientists and engineers were all over Youtube with popularizations about exponential growth (It doubles every 5 days! How long before all people on earth are contaminated? Let's talk about the wonderful logarithms!) or simple epidemic modeling using compartment ODEs like SIR (to show that you were a true genius, above even the already impressive exponential masters).
I must confess I got a moderate ego boost because I can do all that and easily follow all those new youtube stars (the contrary would be a problem for someone earning a living in Computer Aided Engineering, although sometimes you can be surprised at how poorly people master things absolutely mandatory for their job), something I am not so proud of now, after I saw how the covid crisis was dealt with by those "experts":
Those super simple models were never questioned, validated, improved, or discarded as unfit once they failed to provide predictions accurate enough for defining policies. And fail they did. But they were used to justify ridiculous measures that flew in the face of even a modicum of common sense (wear a mask when hiking in the forest, you terrorist punk!)
So before throwing stones at Johnson for not understanding what exponential growth is (or fractions from what I know), maybe there is serious work to do in the garden of those who understand exps and logs.
PS: not that I am against throwing stones at Johnson in general, but let's do it for more problematic issues than being bad at math or maybe plain stupid, like not following the social distancing measures he himself did mandate for example....
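For reference, the kind of compartment model being discussed really is tiny. A minimal SIR sketch with forward-Euler integration (parameters entirely illustrative, not fitted to any real epidemic) fits in a dozen lines, which is exactly why so many people could wield one:

```python
# Minimal SIR compartment model, forward-Euler integration.
# beta and gamma are illustrative made-up values, not fitted to anything real.
N = 1_000_000            # total population
beta, gamma = 0.3, 0.1   # daily infection / recovery rates, so R0 = beta/gamma = 3
S, I, R = N - 10.0, 10.0, 0.0
dt = 0.1                 # days per step

for _ in range(int(365 / dt)):  # simulate one year
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"final attack rate: {R / N:.0%}")  # ~94% in this idealized model with R0 = 3
```

Which rather supports the point above: being able to integrate this tells you nothing about whether its assumptions (homogeneous mixing, constant beta, no behavioral response) fit reality, and that's where the predictions failed.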
Agreed on the importance of each of the topics you've picked.
A bit skeptical about
"Seems like you could do a decent job with each of these with a few articles of about the length and difficulty of articles in the Atlantic or New Yorker, each followed by a discussion led by instructor"
E.g. I don't remember whether the intro economics course that I took as an undergrad was a half year or a full year, but I'm reasonably sure that it was perhaps a factor of 2 or 3 longer than you suggest.
I agree. But politicians are busy and impatient, some are not well-educated at all, and some are not very bright. I was trying to think of something minimalist that would work for people who have not sat down and read something a bit novel and challenging in quite a long time, if ever. Teaching the equivalent of 5 New Yorker articles on a given subject would constitute a gigantic improvement in their grasp of issues related to that subject. There could be an optional phase 2 where politicians can earn certification in each of these subjects; certification courses could be a semester or even a year long, and involve homework and final papers. What do you think of that?
This week I have been thinking about finding a voice. I have experimented with writing in a few different styles this last year, and what I have seen is that my writing style always reverts to something that sounds like a generic magazine article, and is quite plain. This is not all bad, as I can now produce a lot more words of above average quality on demand, but it feels to me that the next step in my writing would be to focus more on the execution and the details, like word choice and sentence structure, or whatever else will help me express myself more in my own voice.
I read something recently that seems relevant; something like, “you’ve found your voice when you know which criticisms you can live with.” Like, as soon as you break out of bland genericism, you’re laying yourself open to some sort of criticism. And it’s the paralysing thought of this criticism that keeps you on the generic straight and narrow. And you know you’ve found your voice when instead of thinking “oh god, is this too flowery/plain/whatever?” you’re like, “some might think that, others won’t, it’s my voice, so be it.” (Obviously there’s a balance here, sometimes your inner critic is right, etc.)
Change your literary diet. Read way fewer magazines and blogs that sound like magazine articles, and binge on prose masters. Here's a list of people whose essays absolutely delighted me: Virginia Woolf in *The Common Reader*. George Orwell. Tom Wolfe in *The Kandy-Kolored Tangerine-Flake Streamline Baby*. Dwight MacDonald in the book *Against the American Grain*. Edmund Wilson. Gore Vidal. Daniel Dennett. Oscar Wilde, *De Profundis*. Ruskin. Alexander Pope's essay *Peri Bathous* (18th century, but very readable; this essay made me laugh so hard I cried). On Substack, I think Sam Kriss writes very well.
Try out a bunch of them, and then keep going and read more of the ones whose prose you especially enjoy. If you absorb some style from several you'll be in less danger of becoming a simple imitator of one.
To loosen up, try writing when stoned or drunk. Or record your thoughts on audio, then turn them into prose without fully cleaning them up.
Later addition: Was ruminating about your topic, and another idea came to mind. Start off your articles in ways that magazine articles never start, as many different ways as you can think of. That will help you start off without having a magazine article mental set. So you can start off with
This is my biggest secret.
Shut the fuck up and hear me out.
You are about to experience my deepest and craziest thoughts.
Yeah, you're smart, but so am I.
You are one of the few people who can fully grasp what I'm about to tell here.
NOW HEAR THIS:
Don't worry for now whether you can use such beginnings in actual articles.
Oh I love that idea! I could definitely spice up my literary diet a little bit. I don't know why but I just assumed that no good essays were written before the Internet came along. Thank you for this!
This reminds me of Étienne Fortier-Dubois's idea of human writers priming themselves with AI-style prompts. ( https://etiennefd.substack.com/p/prompt-engineering-for-humans ) I've never tried this – my problems are more fundamental – but it's an interesting tactic.
I have something of an anti-ear for prose. To continue playing with the idea of prompts, all the magic sauce that goes into the positive prompt to get the universe to spit out a Vonnegut or an Updike or a Leonard goes into the negative prompt that gets you me. At comparable weights, I'll proudly add.
Consider the two sentences in my reply to Eremolalos. That first 'this' ought to be a 'that', it would have been trivial to avoid the close repetition of 'of', and those parenthetical en-dashes, so awkward in such a short declaration, could be dispensed with by means of a simple 'because'. I am Nabokov's Ilya Borisovich Tal made flesh.
But this is a thread about you, not me. Of the three latest essays on your Substack, the one on shrooms has great style, the endorsement of Leahy is not far behind, but in the most recent one on the Twitter rep system you lapse into a more mechanical explicatory mode with more Slavicisms. (I don't speak Bulgarian, but I do speak Russian, and I think I recognise the temptation to write e.g. 'people who are to your liking' because it's closer to которые вам по душе than the more idiomatic 'people you like' – that 'who' is pretty strongly felt to be necessary.) It could have used a little more metaphor, a little more colour, a few more rhetorical questions of the kind that made the other two swing.
Just one rando's opinion, of course, but meant constructively, and perhaps of some value as you tune the voice.
Wow, that was such a great reply! And you actually took the time to read through some of my writing, I'm blown away! Thank you for this! The Twitter piece is the one which I spent the least time thinking about, and clearly it shows. Thanks again for the feedback
Yeah, I've been thinking about that as well. I guess it's gonna come down to practice in the end, so I am just trying to keep on writing the same amount as before but experiment more with the voice
I see more information about JFK's assassination has come to light with the recent publication of a book called "The Secret Witness" by 88 year old Paul Landis, who at the time of the incident was a Secret Service agent in the car behind the President's.
That article reminded me of an analogous incident much further back in time, about which I recently learned more when reading a book called "William Rufus" by Frank Barlow, published by Yale UP (2000).
He sounds like a fascinating character, and in some ways quite modern in outlook. But so repellent did his attitudes and behavior seem to contemporary historians, who were mostly clerics, and to many since, that he's had a "bad press".
Barlow devotes a chapter to the event of his reign as William II (1087-1100) that is best remembered today: his mysterious assassination in the New Forest in the year 1100. In relation to this, he includes several facts recorded at the time, but makes no attempt to identify a culprit from among the many suspects.
As luck would have it, I found an ebook copy of his book on a Russian website a month or two ago. (Those naughty copyright-violating Russians have literally tens of millions of ebooks stashed away, many bang up to date, if you know where to look!)
When I reread the book, the available facts pointed to a clear prime suspect for the killing. My conclusion, for reasons briefly summarised below (if anyone cares much), is that there weren't two assassins, as possibly in JFK's case, or even one. I believe the most likely truth, based on all the facts that can be known today, is that the silly sod accidentally killed himself!
The first fact is that he was killed while out hunting. Now there were various kinds of hunting, and on that day it was not the hectic style, with packs of hounds and riders charging about blowing horns. It was a stealth-mode deer hunt, in which the participants spaced themselves widely apart throughout a forest area and waited for deer to gallop past, which they would try to bag with arrow shots.
Normally on a hunting day the participants would head off, keen as mustard, literally at the crack of dawn. Apart from anything else, it might be several miles from their overnight lodging to the hunting ground. But we are told that on the morning in question they didn't start until after midday. One historian claimed this was because the King had drunk more than usual the night before and had a hangover.
Another chronicler mentioned in passing that a blacksmith arrived at around midday and delivered six arrows, of which the king kept four and gave two to a sidekick called Walter Tyrrell. Although apparently a trivial aside, hardly worthy of mention, this fact may be a key to the mystery!
So in summary, at the start of the fatal day we have a king who may be a bit woozy from the night before, and thus not fit to operate heavy machinery, or any machinery, including new-fangled crossbows.
Perhaps it was not a hangover which delayed the start of the hunt. Maybe the chronicler merely assumed that was the reason for the delay. To my mind, another obvious possibility is that they were waiting for something. From the facts recorded, that was most likely the arrows which the blacksmith was due to deliver.
Now imagine a Texan billionaire inviting his rich pals on a hunting trip. On the day, they all have to wait for a gunsmith who eventually turns up with a mere six bullets. Sounds ridiculous, doesn't it? They would have crateloads of ammo, and it would have been the same with arrows on that fateful morning.
They must have had ample supplies of arrows, with normal heads, and (broad bladed) hunting heads. So the arrows the blacksmith delivered must have been very special, and I suspect they were crossbow bolts. According to Wikipedia, crossbows were only reintroduced into Europe at around this time. So they were cutting-edge technology, doubtless with various models regularly appearing, as with any new technology.
Standing nearest to the King during the hunt was Walter Tyrrell, some hundred yards away. No participant would have had much of a clear view of any others, especially as they would all have been trying to look inconspicuous so the deer would not be deterred. For the same reason, they wouldn't have wanted any attendants or servants nearby.
Immediately after the killing, Tyrrell hoofed it to France, in the not unreasonable belief that he would be blamed. But for the rest of his days, including on his deathbed, he swore by the blood of Christ that he was not responsible for the fatal shot, and most people took their religious oaths very seriously back then, especially when they were about to meet their maker!
So in summary, I believe the king was fumbling to load or reload his crossbow, and possibly turned it upside down so he could push the bowstring down with his foot (if it was an early model without a windlass, he would have had to pull the bowstring back by hand). Then he spotted a deer, and in the heat of the moment he nocked a bolt while the bow was still propped on the ground pointing up at him, and the rest is history...
Reality is not a mystery novel where you're carefully presented with a minimal set of facts designed to lead to the correct conclusion. It's not necessarily the case that the facts known are relevant or even correct, or that the mystery is actually solvable.
My favorite JFK assassination theory is similar. It goes that the Secret Service killed JFK, but it was an accidental discharge that they then covered up. I wonder how many of the great mysterious murders of history were truly accidents that no one believed.
Sounds a bit of a long shot (literally!), but who knows? One thing for sure is that after an "official" accident which there is any chance of concealing, reputation management goes into overdrive.
In the William Rufus case, for example, suppose the arrow in his chest was obviously a crossbow bolt, and Tyrrell was believed and thus ruled out, so an accident along the lines I sketched was suspected. Then, to preserve the royal family's dignity, the official story may have been similar to that related by William of Malmesbury (see ZumBeispiel's reply below).
Luckily there happened to be a known local loony hiding in a nearby building with a sniper rifle so they were able to blame the whole thing on him. Lucky!
Nah; they put him up there to cover for the accident they knew was waiting to happen. The second shooter just happened to be in the right place at the right time. Total coincidence.
The 5 dimensional chess move was to conceal the accident in order to create a cottage industry of conspiracy theories in order to stimulate the economy. :-)
[lizardmen involvement as an extra cost gourmet exclusive option]
Does this cover the predictions and conspiracy theories encoded in the cave painting in El Castillo? https://www.oldest.org/culture/archaeological-sites/ :-) ( Can a plot incubate for 40,800 years before hatching? :-) )
Bit of both. Or maybe the most narratively satisfying is the better way of saying it. So much ink has been spilled trying to find a second gunman, and pulling together a grand conspiracy to explain why it all happened. It would be ironic if in the end it all boiled down to bad trigger discipline and desperate ass covering.
I mean, yeah? By a pretty large margin. I don’t know what to believe in terms of conspiracies, but when the leader of the free world gets his head blown off in such an event, I think it’s safe to say *somebody* wanted him dead.
By sheerest coincidence I just finished reading "Day of the Arrow," a horror-thriller set in a rural French district where, even into the last half of the twentieth century, the peasants (who are secretly in the grip of a Mithran cult) ritually murder the local nobleman when the crops fail. Traditionally, this is done under the guise of the murder being a "hunting accident." One of the conspirators (who are planning to do away with the current marquis) tells the protagonist that this is the real story behind the death of William Rufus.
Heh, it was that movie that got me to read the book. I saw it was on TCM On Demand a few weeks ago, and it looked interesting. But after finding out it was based on a book, I decided to read the book before watching the movie. But then the movie left TCM before I finished the book!
The film seems to have a middling reputation, and judging by the plot synopsis it has been much simplified. Apparently in the movie it's the wife who is actively trying to save her husband? In the book, it's an old friend of the marquis who is the active protagonist -- and whose motives are, um, complicated by the fact that he is also in love with the marquis's wife.
The book, at least, does quite a good job of conveying an air of sinister but sun-lit malice. Everything is bright and out in the open, and the "creep" is conducted at a slow burn. (Maybe too much so: rural horror is now so familiar that the conspiracy is visible twenty kilometers off to the initiated, and so the protagonist can seem pretty dim for not putting the clues together.) I'd recommend it to lovers of the James boys: M. R. and Henry.
Very interesting! It's fun to compare the Wikipedia articles about William II in different languages. The Dutch Wikipedia blames it on Henry I, his successor. Most others blame Walter Tirel. Only the German Wikipedia proposes it was an accident, and cites William of Malmesbury as follows:
The day before the king died, he dreamed of being in heaven. He suddenly woke up. He ordered that light be brought and forbade his servants to leave him. The next day they went into the woods... He was accompanied by a few people... Walter Tirel stayed with him while the others pursued the prey. The sun was already setting when the king, cocking his bow and letting an arrow fly, slightly wounded a deer that jumped past him... the deer ran on... the king pursued him for a long time, raising his hands to shade his eyes from the sun's rays. At that moment, Walter decided to shoot another animal. Oh, good God! The arrow pierced the king's chest.
When he was hit, the king did not say a word, but broke the shaft of the arrow where it protruded from the body... This hastened his death. Walter immediately came running, but when he saw him unconscious, he jumped on his horse and quickly fled.
Hmm, yes, his younger brother Henry (who succeeded him as Henry I) was among the hunting party, and he, or someone who stood to benefit by his succession, is certainly among the suspects.
But I would say his elder brother Robert was a stronger one (or, again, one of his adherents, with or without Robert's knowledge). He had been Duke of Normandy, but had "pawned" it to William to obtain funds for his participation in the First Crusade. He was due back in a week's time, having doubtless spent most of the money on the adventure. So he would have been keen to see William eliminated, to make it more likely he could regain possession of Normandy without having to repay the loan.
The trouble with the contemporary historical accounts is that some are contradictory in certain aspects, and other authors literally made up things or copied them from each other or from earlier accounts of similar occurrences. Often they were more interested in making moral points than relating the facts.
26 year old female literature and history teacher who enjoys Bach and Mahler looking for an older male partner who also enjoys classical music, wants marriage and children in a few years, and is oriented towards technical understanding and good taste
Pokemon Gen 2 was based on the Kansai region of Japan, which among other things, contains Nara, a city famous for its deer parks, where deer roam around the city and visitors buy special crackers to feed them. Pokemon Gen 2 also introduced Stantler, the first deer pokemon. Unfortunately, it's just a random wild pokemon on Route 36/37. They really missed an opportunity to have a fictional analog of Nara there. They could have made it like the Safari Zone where you feed crackers to Stantler in order to catch them.
I mean, there's the National Park where you catch bug Pokemon and the catching contest? It seems kind of similar, in that it's about nature and there are special things you buy to interact with the animals there in the park.
I have recently had some work experiences that give me some insight into one of the potential reasons why so much of modern architecture is so ugly: because the way we build buildings these days requires architects to precisely specify minute details of every aspect of the building on computer generated 2D architectural drawings.
I work for an architectural lighting company, and recently I have been asked to start making production drawings for our orders. Sometimes the customer is fairly clear and this is easy, but not always. Today I finished up a project that took 20-30 hours, where I had to try to interpret the architectural drawings for a building to get the details of what the customer needed to order, and how we needed to build the lights to satisfy their need. There was such a staggering amount of data on these drawings. I was extremely fortunate to have a copy of the drawings on which the architect had helpfully highlighted all of the locations I needed to scrutinize, and I only received the pages relevant to my needs, though I could tell from the page numbering and table of contents that there were over a hundred pages in the whole document. This is a staggering amount of work to produce, and frankly not a particularly great way to convey all the information necessary to build this building.
For instance, some of the most difficult lights to interpret were the ones on the stairs. This building had a set of stairs with a super common arrangement, where you go up half a flight, turn 180 degrees, go up the rest of the flight, then repeat. They wanted linear lights on the underside of the stairs. But despite having multiple views of each set of stairs, the only way I was able to figure out that they actually wanted U-shaped runs on the bottom of each set of stairs was the hand-made drawings somebody higher in the process than me had produced at some point, and which were included in the information packet I was given. These kinds of stairs are simple and common, and yet, with what is essentially a square spiral shape, not that easily depicted in a series of 2D drawings. Especially when those 2D drawings include not only the lights I am trying to specify, but also all the structural elements and trim and flashing and every tiny little detail. It is just so much to go through.
Hundreds of years ago, when a team of builders built a cathedral, there is no way they would have specified all the minute details like this, especially for something like lighting. They would build the structure of the building, with some plans for how different parts of the structure would be illuminated. Then, when it was time to finish the interior, the aristocrat in charge of the project would walk through the space with a head craftsman and discuss the broad goals of how the illumination sources would be arranged, and then the head craftsman would work with a team of skilled artisans to build and install the lamps and other fixtures in situ. The important point being that the small details would be left up to the skilled artisans responsible for the labor of manufacturing and installing the fixtures.
My company COULD do things this way too, if the world were set up to operate this way. Our products are highly customizable and not terribly complicated. We could have sent out a team of a few skilled artisans in a truck with a nice portable mitering saw and a pile of the materials we build our fixtures from, and they could have built everything on site exactly to fit the space, with only vague direction from the architect about what needs to go where, and what kind of style and illumination they want in each location. I think this would be cheaper and take much less time overall than the way we currently do it, where we spend many hours of time with customer service and reps and everyone going back and forth again and again on exactly what is needed. Our products aren't difficult to assemble, and don't require heavy machinery. They could be assembled in the field. And this would mean we wouldn't need to spend a long time carefully packing them for shipping, which is difficult and expensive given that our standard size fixture is 8' long.
But this isn’t what the customer thinks they want. They want a highly customizable pre-made product that they can slip into place with unskilled laborers. We have a ton of problems with our products being installed incorrectly, which just adds more time and back-and-forth, and shipping broken products back and forth for repairs and adjustments and replacements. And it requires that we build our products robustly enough to be installed by laborers who we know will damage them, and robust enough to be shipped without breaking. It all feels incredibly wasteful and unnecessary to me. But this is how builders and architects expect things to work.
And 2D drawings… really? Can’t you just give me a 3D model of the building? No, of course not, that would violate somebody’s intellectual property. That or the architectural drawing software the architect uses won’t give us a license to a reader for those files.
Point is: the way we build buildings these days is with the expectation that every single minute little detail is fully specified in drawings before construction begins. This requires a tremendous amount of effort to plan out, and generates a tremendous amount of data that is difficult to efficiently convey. And of course, standard features are much easier to draw/design with architectural software than some complex, novel artistic concept. And so architects and designers feel this pressure to keep repeating the same patterns that are easy to draw again and again, which is why so much of modern architecture is boring, ugly, and similar.
Nice. I really enjoyed this. I wrote a piece for Planetizen where I highlight the role of having to describe things in words. I think it complements your point about having to describe things in diagrams. Both make it difficult to rely on tacit knowledge, and a lot of what is beautiful depends on tacit knowledge.
The auto-generated roof in most CAD software defaults to each wall getting a slope. It takes an extra 10-15 clicks in Revit to convert the default four-sided roof into a gable roof. It's too easy to draw a crazy floorplan and then just let the computer calculate all the weird rafter angles for you. If you had to draw all that by hand, you'd be much more inclined to keep the outer walls to a simple rectangle or L shape.
Ooh, finally a comment that touches on an area of my expertise. I work in engineering consulting and am one of the people responsible for a subset of the drawings you are probably looking at. I'm an electrical engineer, and I specialize in lighting, as a matter of fact.

Construction drawings are the way you've described, in my opinion, because of what we in the business call CYA, "Cover Your Ass". In summary, the structure of design contracts incentivizes over-specification from all of the design teams, because if during construction something needs to be changed due to a miscommunication or omission on the drawings, the cost of those changes will be borne not by the contractor or the owner but by the designer. Design firms stay afloat by completing a significant volume of work, completing it quickly, and avoiding change orders. There's also the matter that construction permits are issued based on drawings before work begins, and the authority having jurisdiction has a legal responsibility to verify the project is designed to be code compliant, so that's another reason drawings need all this information on them.

I think that standardized drafting definitely has some influence on how any building project ends up looking, but I think it's a pretty weak influence, and I don't think it's actually the reason for the current style, since that style predates the mass adoption of CAD. In fact, if you think about it, CAD ought to make it easier to produce more adornment! Drawing details and specifications can now be easily shared between manufacturers, vendors, and designers, and can be easily reproduced instead of needing to be redrawn by hand. It's not what's easy to draw that creates the style.
As for 3D models, a lot of 2D elevation and plan drawings are generated from a 3D model! Autodesk Revit is the industry standard. All of the building systems are laid out in 3 dimensions first and then exported into 2D drawings for contractors/ record keeping. In my experience, and since the vast majority of deliverables remain 2D drawings, these models tend to only be shared internally during design and the level of sophistication in the models, though it varies, is usually low.
Anyway, if you want architects to do more adornment or complex designs, there's something you specifically can do! Encourage your company to produce and share detail drawings of the kinds of lighting applications you'd hope to see, so that it's easier for architects to put them on projects. The stair example you've shared is a great case for that, because like you said, it's a very standard stair pattern. This might not lead to more neoclassical buildings, but it could lead to more interesting and beautiful spaces unique to our own time.
Lastly, my take on this artisanal approach you're describing is that it's something that couldn't really ever be recreated at scale now, as the contemporary social, economic, and legal reality of construction is extremely caustic to it.
Good points, thanks for the input. I see a lot of change orders happening between the designer and my company. This particular project has been going back and forth (fortunately not that many times) for almost a year. Other projects I have seen have 5+ revision cycles. I think a lot of this could be solved by doing the work on site once the installation environment is largely completed, like I described, though of course I understand there are lots of practical reasons why that can't work in today's culture and climate.
CYA is certainly an important and unavoidable factor that influences our designs and products as well.
I would like to be able to get access to the CAD models for a couple of reasons. First, they are easier for me to interpret, even if they are very complicated already. I make my own drawings in CAD and do a lot of design work as well, so I have a lot of experience with it. But more importantly, being able to directly measure design features out of a live model rather than a PDF would really help clarify certain things (though this could be done with 2D drawings as well). For example, with these U-shaped runs under the stairs, I don’t really know whether the dimension between the legs of the U is center-to-center, inside-to-inside, or outside-to-outside. I can imagine different people in different situations using any of the three. And it does matter if we are cutting components to fit within a 1/8” tolerance, as is our standard, given that our lights are several inches wide. Did whoever was drawing this properly compensate for the width of the fixture itself? It makes me particularly nervous for some of our custom bent/curved pieces, where the radius the drawing provides might be to the inside or outside of the fixture body depending on how it is mounted and whether or not it is recessed. I usually just take my best guess and go with it. Then again, maybe it is silly to worry about an error of a few inches in a curve with a radius of 50+ feet… the fixture is flexible enough to accommodate that amount of error.
I totally agree with you that it would be great if the company could release lots of models. I am a big fan of open-sourcing stuff in general. Good luck though – my boss is very paranoid and secretive.
On a related note, I have some pretty neat optical design capabilities and some ideas for how to build better LED light engines. The company I currently work for has a lot of heart, but is ultimately too small to really take advantage of what I have to offer, and can’t afford to do the R&D required to develop my designs. So I am looking to find a new employer with more resources, and even half-seriously considering trying to launch a startup. Is that something you would be interested in talking about? Anybody at a large lighting manufacturer (Acuity, Phillips, etc) you could potentially refer me to?
Definitely I'd be interested in talking more about light engines and fixtures! I have some contacts in the industry, but nothing at any of the big manufacturers like Acuity yet. I recently moved to NYC and I'm hoping I can parlay that into more networking opportunities specifically within lighting. Sorry for the delayed response to your comment (the holiday interrupts everything), but if you see this response I'd be happy to keep chatting with you about this stuff.
I remember reading about a way that modern buildings are specifically weak to after-the-fact customization: office building floors are made with a thinner concrete layer that only holds because it is under tension, but will break if someone tries to drill a new hole in it (similar to safety glass).
Do you have other examples of practices that make later modifications of a building harder?
Slight correction – you’re probably talking about post-tensioning here. Very simplified explanation: concrete is terribly weak in tension but fairly strong in compression, and when a bending moment is applied in the middle of a member, the top half is in compression (good) and the bottom half is in tension (bad).
Traditionally, the bottom tension forces are taken purely by rebar, which means that the bottom half of the concrete is doing very little (thermal mass etc. is still useful), but if you run steel cables through the beam then tighten them after the concrete has cured, much more of the concrete is in compression and you can get away with longer spans and/or thinner slabs.
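The trade-off above can be sketched numerically. This is a toy illustration with assumed stress values (not from any real design code): bending puts the bottom fiber of the member in tension, and post-tensioning superimposes a uniform compression, so the net bottom-fiber stress can stay compressive.

```python
# Toy sketch of why post-tensioning helps (all numbers assumed for illustration).
# Sign convention: compression positive, tension negative.

bending_stress_bottom = -4.0   # MPa: tension at the bottom fiber from the load
pt_precompression = 5.0        # MPa: uniform compression from the tensioned cables

# Without post-tensioning, the bottom fiber is in tension and plain
# concrete would crack; rebar has to carry that force.
net_without_pt = bending_stress_bottom            # -4.0 MPa (tension)

# With post-tensioning, the precompression offsets the bending tension.
net_with_pt = bending_stress_bottom + pt_precompression   # +1.0 MPa (compression)

print(net_without_pt)  # -4.0
print(net_with_pt)     # 1.0
```

With the net bottom-fiber stress compressive, more of the concrete section is actually working, which is why longer spans and thinner slabs become possible.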
No issues with drilling through the slab as long as you avoid the post-tension cables, so the biggest impact is that a hole may not go exactly where you want it. One website recommends not drilling into a PT slab at all, but even if you get some idiot who starts drilling without checking for cable locations, you’re not going to damage the cables with standard concrete cutting tools (although if they’re an idiot, they may also start drilling with diamond drill bits, so don’t hire an idiot).
I mean, one clear example that people hundreds of years ago didn't have to deal with is routing wires. It's much, much easier to install all the conduit for wiring before the walls are completed. That alone is a good enough reason why my proposal for modern-day artisanal lighting wouldn't really work out. People didn't need to install conduit for candles / lanterns hundreds of years ago.
Sounds like a good case for a VR app, where the user (in this case you) could walk or float round a virtual image of a building, using a joystick, and highlight and adjust various aspects which would be invisible in real life, such as temporarily making the walls almost invisible so you could see the wiring and pipework, etc. You could even be joined by the architect, as in a multi-player game, and collaboratively clarify things such as this stairway lighting. It's a damned sight cheaper to make adjustments with just electrons than with real materials!
...many of which are not necessarily historically *accurate*, because reconstruction is more like "we were able to identify presence of these color pigments here and there, so we paint it by numbers" and less like "a masterpiece paintwork, similarly lifelike as the sculpture beneath created by equally well-skilled artisan sculptors and painters"
I think it's fairly obvious they would. The reconstructions are garish and awful.
Has anyone tried to present a project showing what a famous "white" statue would look like if it was painted in lifelike colors by an actually skilled painter?
Regarding your language learning proposal from a while back, I think English -> Japanese is one of the *worst* examples you could have chosen. You could kind of do what you propose for closely related languages like Dutch or German where the word patterns closely match English (though even then, good luck explaining gender), but English and Japanese are just so utterly different that Mad Libs study makes no sense.
I would say the opposite. I tried to do it with a French sentence, but it was boring because the grammar matches so closely that most of the steps were identical. The proposed learning method is designed to help you learn a weird grammar by exposing your brain to different word orders.
But grammar is about WAY more than just different word orders. Again, with French or German, you can more or less do that (if you ignore gender and other inconvenient details). But Japanese grammar isn't just a permutation of English. All of the *concepts* and *building blocks* are completely different. You can't just map 1:1 between them mad libs style.
As I understand it, Scott wasn't claiming that the grammar is just a permutation. He was looking for a way for adults to "learn by osmosis", the way children learn when they have no other choice, but which is hard for adults because we keep wanting to use the language tools that we already have. The idea, as I see it, is that we'd eventually start to feel that it was "natural" for certain things to be phrased in certain ways in Japanese. And this goes from things like "these two grammatical formations appear the same in English but are different in Japanese" (and vice versa) to "these two concepts use the same word in English but are different in Japanese" (and vice versa). And eventually we'd fine-tune into all the little nuances that are hard to fit into grammar books and dictionaries.
It's not pattern matching or belonging to the same family. You'd generally want a language without much inflection, one that's highly analytic. English is a relatively analytic language (one of the most analytic in the Indo-European family) but most East Asian and Southeast Asian languages are analytic. As are many in West Africa. The fact that Korean and Japanese are highly synthetic is one reason some people argue they're related to Turkic or Mongolic languages (which are also highly inflected).
This is one thing that frustrated me about non-PIE languages. I was used to languages being more synthetic than English but many are more analytic. And the philology is often not all that advanced and most philologists deal with highly synthetic languages meaning a lot of the tools are inapplicable. Quite annoying.
Anyone have advice about reverse mortgages? We're going to outlive our retirement funds if we don't do something and that looks like an option. We have substantial equity in our home in a high cost of living area, don't want to move, and don't need to worry about leaving any inheritance.
Hey, someone pointed me here to give my take. So reverse mortgages are generally bad deals. While it depends on the exact terms, you're generally selling your home equity at about a 50% discount, or worse. For example, if you're 60 years old, live to be 80, and have a house worth $500,000, then at current rates you'll get about $1,000 a month. Over the 20 years that's $240,000, after which they get the house, sell it, and make a cool $260,000. And this is without taking discount rates into account. You'd have to live well past 100 for it to make sense for you.
If you want to access your home equity, you can instead get a cash-out second mortgage at basic mortgage rates (8%) or a HELOC (10%), either of which lets you access about 80% of your home equity. The advantage of a HELOC is that you don't have to take out the entire amount; you can charge things to it like a credit card. You can then take the same $1,000 a month out if you have a little discipline. In the same scenario as above, what would happen is that at your death the house would get sold, the bank would take the $240,000 you owe (plus any interest), and the rest would go to your heirs.
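The arithmetic from the example can be laid out side by side. All numbers are the illustrative ones from the comment ($500,000 house, $1,000/month for 20 years), and interest and discount rates are deliberately ignored, as they are above:

```python
# Rough comparison of the two scenarios described above.
# Illustrative figures only; interest and discount rates are ignored.

HOUSE_VALUE = 500_000
MONTHLY_DRAW = 1_000
YEARS = 20

total_drawn = MONTHLY_DRAW * 12 * YEARS   # $240,000 drawn in either scenario

# Reverse mortgage: the lender keeps the house at death.
reverse_lender_profit = HOUSE_VALUE - total_drawn   # what the lender pockets

# HELOC / cash-out second mortgage: the estate sells the house,
# repays the amount drawn (interest ignored here), heirs keep the rest.
heloc_heirs_receive = HOUSE_VALUE - total_drawn

print(total_drawn)            # 240000
print(reverse_lender_profit)  # 260000
print(heloc_heirs_receive)    # 260000
```

The same $260,000 residual goes to the lender in the reverse-mortgage case but to the heirs in the HELOC case, which is the core of the argument; real interest charges would shrink the heirs' share but not change the direction of the comparison.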
Of course, the danger there is that you can run out of money while a reverse mortgage lasts for as long as you remain in the house. But keep in mind it is as long as you remain in the house. You lose that equity if you move or get put in a nursing home or even get hospitalized for long enough.
Also, since we're generally dealing with relatively small monthly incomes, it's pretty easy to get a similar amount from a side hustle, which would be my first recommendation. If you really just need an extra $1,000 a month and you have free time (as most retirees do), it's not too hard to find something, often from home and low stress.
I've been doing some reading on retirement planning lately. Most recently the Retirement Planning Guidebook by Wade Pfau. Like one of the other people who've replied, I'd also heard bad things about reverse mortgages, but the book has me reconsidering. It sounds like they were cleaned up a lot since they first became available. I believe he also has a separate book completely dedicated to reverse mortgages which, presumably, has more details.
I don't know about US law, but in Europe at least there's also the option of selling the "naked property rights" while retaining a lifelong right to use it.
Yeah, at least you'd want to sell to a major US or EU bank, not to some random individual who might have mafia connections. Financial institutions can have a bad reputation, but they're also risk-averse enough to avoid killing people for a few hundred thousand dollars.
What an absolutely wild ride this continues to be... almost any option seems to still be on the table, including OpenAI's board stepping down and their successors reinstating Altman (good, but fraught without a clear account of the motivations and supposed reasons for his firing), the acquisition or acqui-hiring of OpenAI by Microsoft (bad short-term, ok-to-good-medium-term, probably bad long-term), or the OpenAI board staying the course and hoping the company isn't a ghost-town by next week under Emmett Shear as CEO (worst, no knock on Shear, but this is the bad-end outcome that likely results in bargain-bin acquisition by Microsoft with serious losses of employee retention and major interruptions in development and service).
These options and more are largely all in play, and Microsoft wins big in almost any scenario. This may be why Microsoft CEO Satya Nadella has been so magnanimous in keeping these many paths open, characterizing Microsoft's hiring of Sam Altman and Greg Brockman as a "holding action", and committing to continued support and partnership with OpenAI regardless of how things shake out. Microsoft currently has the ability to essentially end OpenAI as a solvent company right now, but Nadella has (imho) shown a great deal of leadership and pro-cooperation tendencies when the chips are down, at the present juncture... maybe in part because there are very few paths where Microsoft doesn't come out on top in this shake-up.
I reject any interpretation that some of the decision-makers here were "playing 5-D chess" or planned for any of this... there's simply too much variability in highly stochastic systems (such as human choice). Rather, the arc of this entire story has been characterized by extremely reactive decisions where the likely consequences weren't thought out or well-considered, with the outcomes of those decisions spiraling quickly into chaotic no-win scenarios. As usual, the winners here aren't those who had some kind of "grand master plan", or even expected the players to respond rationally according to their incentives and self-interest... but those who could respond quickly, effectively, and cooperatively to events where decision-makers acted in irrational and self-damaging ways, while also leaving opportunities open for "saving face" and not rubbing salt into the wounds of any perceived vulnerability.
I applaud Satya Nadella, Sam Altman, and Greg Brockman... this has been a master-class in damage control and applied game theory, in many ways... as well as Ilya Sutskever for admitting when he was wrong, taking accountability for his choices, and course-correcting. None of those things are easy or natural, and it speaks to the professionalism of all involved that Altman and Brockman responded very positively to his contrition in the face of what must have felt like a massive betrayal by Sutskever.
I await any further developments just as everyone else is.
Well, Sam Altman is back in as CEO of OpenAI, with the majority of the board of directors stepping down. This is probably the best outcome that could have been hoped for under the circumstances!
It is somewhat concerning to me that Adam D'Angelo, the CEO of Quora, retained his seat on the board under the new regime. Despite D'Angelo's vested interest in Poe, his AI chat company, it seems to me that Quora has a direct conflict of interest with OpenAI, unless it pivots its business model significantly.
Ilya Sutskever is also off the board now, though it sounds like he will remain as OpenAI's chief scientist... signs look good that Altman and Brockman aren't harboring any vindictive feelings toward him, and that he'll be welcomed back into the fold. Still, a misstep like this will rightfully be a setback for Sutskever, and likely means that he will not hold a governance position at OpenAI or at any other tech firm in the future. A displayed lack of loyalty is very difficult to get past for anyone, and a broken trust is hard to make whole again.
Your comment would be more interesting if you had spelled out why you seem to favour the opposite sorts of outcomes to most people in this and related spaces.
Yours would be a lot more interesting if it spelled out any basis or merit for your claim, instead of merely slinging accusations and being antagonistic for no reason?
I'm not sure I have the same opinion as Jack, but let me explicitly ask what I am curious about, and might be the same thing. I think this is one of the worst scenarios since the Board Members who cared about AI Safety, EA, and not killing everyone on the planet look like they would be removed. The best option seemed to be that Open AI stayed where it was as an AI non-profit that cared about AI safety and wanted to prevent it from killing us. Yes, Microsoft was likely to poach some of the talent, including Altman, but ideally enough would be left with Open AI to still do significant AI Safety research.
I am not the most informed AI person, so I am wondering where I might be making a mistake. Do you think my analysis is wrong and why do you think this was the Best Case Scenario?
I welcome the chance to talk about this a bit, and I feel more than a little regret that I flubbed it in this thread with Jack... when things seem really vague, I guess I default to assuming some amount of malice is involved. Whoops.
To be clear about my assumptions and priorities, I would say I'm very motivated by AI-safety, but not quite all the way in the "doomer"-camp so far... I think that the outcomes of AI development are going to depend a lot more on the actual design and the specific ground-up architecture of these systems, much moreso than the business policies, intentions, and governance of the companies working on AI today. I expect there to be *significant* changes in the industry in the next 10 years, and I don't expect true AGI or singularity-like events until 2040-ish (this week's news made me revise my estimate to 2045, but I'm updating back to ~2040 after Altman's reinstatement).
On the OpenAI front, the best outcome would have absolutely been the status-quo a week ago... a for-profit subsidiary governed by a non-profit board with members who take AI-safety very seriously. Now, it looks like that board is going to become far more like other Silicon Valley tech companies... motivated by iteration and disruption, quick dev-to-market delivery, and quarter-over-quarter ROI. This is very bad, actually! The reason I say that it's the best outcome, given the circumstances, is that almost every other path looked very likely to result in full acquisition or acqui-hiring by Microsoft... or (probably even worse) Alphabet or Meta. Remember that a significant portion of Microsoft's investment in OpenAI has been providing servers and GPU infrastructure... they could have easily pulled the plug or strong-armed OpenAI in this situation (there's nothing "safer" than an AI company that doesn't have access to any compute), but that's not at all what Nadella did. OpenAI maintaining independence as a company, and (for now) continuing the non-profit/capped profit subsidiary corporate structure with Altman at the helm is the closest thing to the status-quo that I thought could come out of this, and I do think it's a qualified win.
Other moving pieces here are that Musk famously bowed out/was forced out of OpenAI because he thought Altman wasn't prioritizing safety and transparency enough... so it is reasonable to assume that the same issues are sort of coming back for round 2, except that members of the outgoing board specifically said that their decision to fire Altman didn't have anything to do with AI safety (not that they offered any transparency on their *actual* reasons). Also, Anthropic broke with OpenAI for very similar AI-safety reasons... and Yudkowsky et al. have been doing excellent work at MIRI on AI-alignment for quite a few years (only for EY to essentially throw up his hands and declare the problem unsolvable with the current resources and timeline). If we manage to create human-level or superintelligent AI that doesn't kill us all as its first order of business, I don't believe it will be because we managed to create a mathematical proof of ethics to bind it with, or because the board of directors of OpenAI had the right composition of thoughtful, well-intentioned people guiding it through the end of 2023... I expect it will come after quite a few massive failures and successes in technology, societal adaptation, and systems integration between many of the advances that OpenAI has made (and will hopefully continue to make) and some others that are still very much on the horizon. When we get there, I expect the landscape to look very different from the map that we're using now... and I think the best chance that we have of getting a map that updates and responds to new developments quickly and anything resembling accurately is for several more iterations of AI technology to be very visible and undeniably apparent (instead of something that gets developed behind closed doors and ends up benefiting only those who are strategically invested in it). 
I don't believe the political will or public understanding will reach a point where we can marshal appropriate resources until it is very obvious how real the problem is, and how seriously it needs to be taken. Right now, Sam Altman seems to me to be the person best positioned to lead OpenAI in a direction that gets to that point with a reasonable balance of safety and practicality... he's been one of the primary people in the industry pushing for the democratization of AI, and while I find his safety strategy of "we'll keep moving forward until it becomes obvious we need to pause" more than a little concerning, I drastically prefer that to a timeline where it looks like nothing is happening for another 15 years, with opaque developments that are only used internally at large corporations... and then everything changes overnight.
It is not obvious to me that *time* is the only, or even primary resource for solving the AI-alignment problem. Nor is money, nor creativity, nor even intelligence, in a vacuum. It will take all of these resources and more... and I think a world where these technologies are available and accessible is one that is slightly more likely to be the one where our species survives. A longer runway would be fantastic, but it is not obvious whether significant progress can be made on the theoretical front in the time that is given to us... if there are reasons to believe otherwise, I'm all ears.
To be completely transparent myself, I am not the most informed person about many things, including AI... and much of what I'm saying here really is pure speculation. But that's how I see it... sorry for the essay, but let me know if you think I've made a mistake, or if I'm getting anything obvious wrong. I also absolutely reserve the right to change my opinion here if new information becomes available... it has been frustrating (I think for many of us) to watch this story unfold this week with crucial information like the actual reasons for Altman's firing withheld... and I'm operating in the dark just like everyone is.
I've read my comment back a few times, now, and I still have no idea what "accusation" I am supposed to have "slung". I was asking an (admittedly implicit) clarifying question, and politely enough, I thought.
I suppose the 'would be more interesting' framing is unnecessarily negative in tone. But is it really so rude to suggest that your comment was not *maximally* interesting?
In any case, that suggestion clearly has caused offence, so it seems prudent to move on without the clarification I was hoping for.
"You seem to favour the opposite sorts of outcomes to most people in this and related spaces"
This is akin to a "no true Scotsman" argument, and tacitly accuses out-group alignment.
Rather than saying, "hey, explain to me why you're different from everybody else", especially when I'm not aware of a difference (nor of monolithic agreement here or elsewhere on many AI matters), try using specific details when you're asking your question. You might say, "Hey, I notice that you are being complimentary of Nadella, but I myself have a lower opinion of him for these reasons..." Idk, I don't know what your actual perspective is, or what "outcomes" you believe "most people in this and related spaces" predict... mostly because you didn't state anything like this... you just somehow jumped to the conclusion that I disagree with other people here, without supporting evidence... which doesn't really give me anything I can respond to. Am I supposed to guess at the ways you think I differ? Will you let me know when I land on the one you had in mind?
Again, I promise you it was merely an attempt at clarification.
My comment specifically referred to "outcomes", so it should be clear that I was referring to the section of your comment where you discussed the desirability of various outcomes. Therein, your ranking of outcomes certainly does seem contrary to the prevailing thought here, insofar as you explicitly prefer what is perceived as the 'AI safety side' to back down/lose, and call the scenario where that faction sticks to their guns (and with the perceived safety-conscious replacement CEO) the "worst" outcome.
I didn't think I needed to justify the idea that "most people in this and related spaces" think differently about AI risk and the OpenAI drama, given that it's being called a 'disaster', and a significant increase in x-risk, by some of the most prominent voices here, and I haven't seen a huge amount of dissent on those points. In fact amusingly, elsewhere in this very thread I've been told that the space lacks diversity of thought on this issue.
So yeah, your sentiments appeared to differ from the norm, here- which I didn't think I needed to say *is not a bad thing*- and I thought it might make for "more interesting" discussion if we could clarify the reasons why. I no longer think there is any prospect of interesting discussion, so I will again attempt to move on.
Christopher Mims recently voiced some suspicions I've had recently regarding smartphones ('Social Media Is Warping Into Old-Fashioned Mass Media,' Wall St. Journal, November 18-19). Is the nearly obsessive use of smartphones healthy?
I don't own a mobile phone, and never have.
It's not that I'm particularly against it, although it has made driving a lot more dangerous. I just don't see the point. I've never played a video game, either. Again, the point?
Is there anyone else out there who hasn't turned into a cyborg?
OTOH I'm not sure the "Okta dance" (or whatever equivalent method) actually adds much to security, compared to the death-by-a-thousand-papercuts annoyance it creates when you have to fumble with your phone every time you want to do stuff (or can't do it at all if you accidentally left it at home, or it's out of battery, or malfunctioning some other way; it happens!).
Also, some restaurants, theme parks, etc. Life is becoming more and more designed around the assumption that everyone has a smartphone with internet access at all times.
Do not play video games, have not owned a TV for several decades, don't use a phone while driving because I don't have a car, just a bike (don't use a phone while biking either). On the other hand, I love playing with AI image generators, follow several blogs, and participate pretty energetically on ACX. So I'd say I'm only about half cyborg.
Well the point of videogames is to allow interactions with people without having to interact with, like, REAL people, who would want things from you. They're a massive victory for introverts.
I haven't had a chance to read all the comments on the Girard post, but I thought Scott was overly harsh on Girard, especially the last two chapters on political correctness (this was Girard's term). Scott writes: "So Girard is stuck in an awkward position of saying that the rise of concern-for-victims was good when Christianity is doing it, is bad now, and not having any good theory of what changed, or how this relates to the more speculative anthropology." I take Girard to be arguing that what went wrong is that contemporary western culture took the concern for victims from Christianity but then threw away the rest of the moral framework in which it was embedded. That moral framework includes, for example, exhortations to love your enemies and forgive those who persecute you. Take away those things and you end up with a system that is ostensibly concerned with victims but uses that concern to justify the kinds of scapegoating and victimization it's supposed to be against. As for why this changed, this is just Satan reasserting himself within the moral system that threatens his power: using the concern for victims against itself. Of course this doesn't explain why political correctness arose exactly when and how it did, but I don't think Girard is trying to explain specific details of history like that.
> Of course this doesn't explain why political correctness arose exactly when and how it did,
Arguably that was largely because increasingly prosperous societies were running out of genuine internal "third world" problems, so people became hypersensitive to perceived first-world problems, analogous to how an insufficiently challenged immune system can become over-sensitive and go haywire, with allergies and so forth, when challenged even mildly.
It's about the post-Christian world and the values that have shaped it. After the Enlightenment, they dumped Christianity but kept a lot of the moral and ethical values, just deracinated and instead established on a basis of vague "rights".
This led over time to values floating in a void, and something like "compassion for the victim" being made an end in itself, and falling back into the same trap of "we need a scapegoat", except this time - due to the roots in Christianity - it wasn't the ostensible victim who was the scapegoat, but the persecutors and oppressors.
As always, I'd like to point out that these moral and ethical values predate Christianity by a long margin; as, sadly, does the notion of finding a scapegoat. Christianity is only about 2000 years old, and has had 4000 or more years of experience to draw upon and remix. Which is not to say that there's nothing new in Christianity whatsoever; rather, it made many incremental changes -- as did every other religion and ideological movement.
>As if Christians haven't consistently thrown out the moral framework of their own religion.
>People will always find ways to get their own religion to justify anything.
This looks contradictory. Do they throw it out or do they use it? Is this a coherent train of thought or a stream of anticlerical invective commonplace in internet spaces like reddit?
The BlueCrossBlueShield carrier I use got hacked, and what was taken was not just account numbers but also passwords. BCBS paid for all those whose records were exposed to get 2 free years of Experian identity theft protection. So I signed up for that, but am not sure that was a good idea. Experian already knew a shitload of stuff about me. In order to prove I was who I said I was, I had to give correct answers to a bunch of questions they asked me about my own finances, such as the names of banks I have used in the past, and the model of car I bought 5 years ago (how do they even know that? It was a cash transaction between me and the previous owner.). And when I signed up for Experian's identity theft monitoring I gave the company a bunch more information about my finances, including numbers, expiration dates, and security codes of all my credit cards, and numbers of all my bank accounts. So now I'm thinking, so what if Experian gets hacked?
> Experian already knew a shitload of stuff about me.
Don't assume they had (complete) answers for all the questions they were asking. Some of the questions asked as part of the identity check process by credit bureau folk (and background check companies, and others who store lots of private information about you) are meant to fill in gaps in their own knowledge.
The process for signing up combines identity verification with profile building / completion work
I'm pretty confident they knew the info they asked me about in the identity check process. There were 5 questions and they were multiple choice, with 3 wrong choices and one correct one, about things like the names of banks I'd used in the past. And these questions included *dates*: "In 2018 you opened accounts at Bank of America. Which of the following was the bank you used immediately before that?"
Once I'd answered the questions correctly, then as part of signing up with them I told them a bunch more stuff, like account numbers for 3 different bank accounts and numbers, expiration date and security codes for 2 debit cards and 2 credit cards. If I'd had unpaid loans I'd have had to give them data about that, too, but I don't have any.
Unless you're currently in the process of applying for a loan, you should freeze your credit with the three credit bureaus to make it significantly harder for someone to take out a line of credit under your name and SSN.
It's not an absolutely perfect protection - theoretically someone can hack into the bureaus and unfreeze your credit - but it will thwart the lazier (and far more common) identity thieves from doing things like opening store credit cards or taking out car loans with your SSN.
> such as name of banks I have used in the past, and model of car I bought 5 years ago
I believe this is used to narrow down all the possibilities of people that could be you. Either the same or a similar name, or some other similar identifying information. Did they present other options for you to choose from? Likely someone else with your name matches some set of those other options.
> (how do they even know that? It was a cash transaction between me and previous owner.)
I'd assume from the DMV when the title was transferred or when you registered the car.
> So now I'm thinking, so what if Experian gets hacked?
Well, Equifax (an Experian competitor) did get hacked: https://www.ftc.gov/enforcement/refunds/equifax-data-breach-settlement. Not the most comforting news, but also I haven't seen any huge fallout from that hack, so maybe a similar hack on Experian wouldn't be that bad? I also seem to remember speculation that the Equifax hack was by China or a similar state actor, so they may not have been interested in you or me when they got the data.
Experian are the ones who set credit ratings; if they get hacked, it's basically hacking everyone who uses a credit card. On the plus side, they're the ones who set credit ratings, so if they get hacked they're in the best position to compensate for that.
They probably know the car model because you registered it to drive it. The payment might be cash but the title transfer is on record.
Has anyone found a good dark chocolate brand that is low on heavy metals? I eat a lot of dark chocolate, like 1/3 of a bar per day, and I'd hate to give it up.
Well this is a new thing I never thought I'd have to worry about.
And it's even dangerous to eat healthy foods because:
"Even if you aren’t a frequent consumer of chocolate, lead and cadmium can still be a concern. It can be found in many other foods—such as sweet potatoes, spinach, and carrots—and small amounts from multiple sources can add up to dangerous levels."
Honestly? I wouldn't worry about it. You eat one whole bar over three days. Oh, the gluttony! You are going to die in the end anyway and you have to die of something. Eat the chocolate, forget the Californian health warnings about "everything will give you cancer".
I worry Altman’s sacking illustrates what I long feared: that the limited influence of AI safety enthusiasts on the world will be burned for negligible impact on AI safety.
Now is not the time.
LLMs reduce AI risk, in the same way calculators reduce AI risk: a person with a calculator is "superintelligent" compared to one without, so calculator technology raises the bar of how intelligent AIs have to be to surpass humanity.
(Of course, LLMs also increase AI risk, in several ways which were discussed to death here. But I expect no one to read this parenthetical! ... also, it makes sense to me that exploiting LLMs for all they are worth will reduce AI risk according to the argument above more than it will increase AI risk, because in LLMs at least the initial training objective is reasonably orthogonal to paperclip maximization arguments.)
I think there is little to be done to influence AI safety. There are just too many huge forces pushing things in the direction of rapid development: Competition between the companies, the gigantic sums of money to be made, US fear that China will beat us to the finish line, whatever that is. In my opinion the only thing that would slow things down would be some AI-related catastrophe that is so genuinely alarming that public attitudes shift a lot and even those strongly motivated to develop the technology take heed.
Has anyone here read Tom Holland's *Dominion*? I am starting it now, and definitely intrigued by some of the crossover between Holland's points and the recent "I See Satan Fall Like Lightning" review.
OpenAI is a private company. The board has no obligation to tell the PUBLIC in advance or later, that they were going to fire the CEO and why. The board members might have had an obligation to tell the shareholders (such as Microsoft) about this.
So I'm wondering why they're being called "secretive" accusingly. They had no obligation to share this with the public, even in the vague terms they did.
> The board members might have had an obligation to tell the shareholders (such as Microsoft) about this.
OpenAI doesn't actually have "shareholders" in the regular sense. It's a non-profit, and their agreements with Microsoft and others are clear that OpenAI has no obligation to make any profit for anyone, and they don't share any control with them either. See https://openai.com/our-structure
OpenAI is intended to work for the benefit of humanity. It's rather different than an ordinary private company. (That said, I still have no idea who's in the right here overall.)
The diagrams are pretty interesting and instructive. It certainly changes the tenor of the "revolt of employees loyal to Sam Altman" to see a chart which flags how the employees also own shares of the holding company that owns OpenAI Global, LLC, and presumably stand to make boatloads of money if the (alleged) dispute between Altman and the Board over more money vs more safety were to resolve in favor of the former.
It also adds some context to all these references to "capped profit" to click through to the post announcing the structure and see the sentence "returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."
The board has responsibilities and obligations to various specific groups, and are in a sense stewards of what they "direct". I don't know that they have any specific obligation to do what's good for OpenAI, the limited-profit company, but I'd bet that the company is an asset of some other group (or an asset of an asset, it looks like?) that they do have obligations toward.
For example, I may not have a direct obligation to preserve my child's friend's Lego fortress, but I do have obligations to my child, and smashing that Lego fortress might make my child rebel against me in solidarity with their friend, and thus make it harder for me to do my duty. I should only do something like that if I have a good reason that outweighs the foreseeable consequences, such as if the Lego fortress were also a summoning circle for a primordial evil from beyond space and time.
When the board kept their reasons secret, this caused people to not trust their judgement, and some of those people were most of the employees of OpenAI. The board should have predicted this. Firing a founder is a big thing, firing a charismatic leader is a big thing, and firing the CEO of the world's leading AI company is a big thing. There were always going to be scrutiny and questions, and the board seems unable to respond.
I don't know how it influences the announcement of this decision, but the board is made up of people who would not be on a traditional corporate board, and there are only 4 of them not from the C-level team. That's a very small board.
I think they are being called secretive because the action they took makes very little sense from the outside and even happened while Sam Altman was actively participating in PR activities for the company. Then there was some bad corporate communication from the board which allowed speculation to run wild.
I think people are throwing around "secretive" because they didn't state plainly "We are giving Sam the Order of the Boot because (he stole the tea money)", it was some vague 'parting of the ways' which just inspired a lot of "what possibly happened???" speculation.
My leading theory at the moment is that the board were idiots.
A legit reason where you might fire someone and then clam up about what they did is when they did something kind of criminal, the board is anticipating that they might be facing criminal charges themselves, and is desperately avoiding saying anything that might be used against them in the anticipated criminal trial. But given the other hints, I don't think it's that.
FWIW, my suspicion is that the board were worried about the copyright implications of (possibly) Altman's keenness and willingness to slurp practically every word ever published anywhere between 2018 and now, to upgrade OpenAI to v5, just as I gather they did the same with every recorded utterance since the dawn of time to get as far as they have already. But obviously I don't have any inside info, and that is pure conjecture.
1) they may not have an obligation imposed by the US Government to tell people, but clearly a lot of other stakeholders are imposing an obligation on the board.
2) when *all* the leading theories make the board look bad, nobody should assume there's a good reason just because they haven't specified what stupid reason they actually had.
3) pretty much the only people not to have said anything publicly in the last 3 days are the board members. that is the definition of "secretive".
If you need my dedicated work for your plans to succeed, and I tell you "you must do X or I will quit" and I mean it, then you may not be "obligated" to do X under the law or some abstract theory of ethics, but your enterprise will crash and burn if you don't do X.
Right now, it looks like an awful lot of the people OpenAI will need if it is to succeed, are insisting that the company is "obligated" to explain itself re the Altman firing. Whining about how that's not really "obligated", is irrelevant.
That makes no sense. Keeping a secret doesn’t imply that you have an obligation to speak.
They’re declining to give an explanation for dramatic actions which have attracted a great deal of public interest. Of course they don’t have to explain themselves. But it is absolutely notable that they are choosing not to explain themselves.
Imagine you took a vacation to a foreign country, but refused to tell people where you went, or post any photos that could identify it. Sure, you have no obligation to tell friends or family where you're going, but why wouldn't you, unless it was something you know they would disapprove of? It's more effort to hide it.
It seems like the EA community has had at least three massive failures this year. Firstly Yudkowsky's comments about basically establishing a police state to prevent AI. (You can disagree with this characterization but this is the public sentiment of it. Even if you think he's right it's a PR failure.) Then the SBF debacle which I hope no one is still defending. And lastly this AI ouster. Now, you might think that last one is the right thing to attempt. But pragmatically it's not working. And I don't give you points for trying and failing.
In practical terms the philosophy seems in retreat on all fronts and with a severely tarnished brand. So my question is: are there feedback mechanisms? Does anyone get fired? Or does everyone continue like normal? If everyone does just carry on then I think this probably signals the end of EA as influential in large segments of technology. The AI ethicists with their left coding and elite backing have more sway with the government and the accelerationists will take over the actual companies.
I pay a decent amount of attention to these things, and I've never heard anyone characterize Yudkowsky as advocating a police state. I don't think this is a real thing people say.
I assumed it was common knowledge that Yudkowsky advocates extreme control of computational power which implies a police state if you're lucky or nuclear war if you aren't lucky.
How does "extreme control" differ from the mainstream proposals for international compute caps? (Eg, https://aitreaty.org/ , supported by Max Tegmark, Yoshua Bengio, Gary Marcus, Jaan Tallinn, Toby Ord, Connor Leahy, Katja Grace, etc etc.)
Enforcing a cap is probably easier than avoiding nuclear proliferation, for a lot of reasons, and would also require way less enforcement effort, assuming everyone knows that, unlike nukes, the equation isn't "you get nukes -> you have more political power", it's "you get AGI -> you and everyone else dies".
In any case, thank you for showing me an example of this being something that someone does in fact believe. :)
I would think restricting nukes is easier because the plutonium is a challenge to acquire.
*Does* everyone believe that AGI is deadly? Surely, if everyone (or everyone who could contribute to programming it) believed that, no enforcement would be needed.
If AI researchers each believe: A: AGI is potentially but not certainly deadly (which I think is the consensus view in AI research circles), and
B: They are very good and diligent and conscientious in doing their job, much more so than all those other bozos in the field (which I think is the consensus view of approximately all of humanity), and
C: All those other bozos in AI research are money-grubbing technophiles who are too focused on the fortune and glory of winning the AI race to be properly diligent and conscientious about the risks (which I think is mostly kind of true),
then everybody will "logically" conclude that the best thing to do is to develop their own AI before all those other bozos, because theirs will be marginally less likely to kill everyone than the one the other bozos will inevitably build. Plus that way you get all the fortune and glory.
Exactly. I've always rolled my eyes at the "North Korea won't develop AGI because they don't want it to kill them either" argument.
If you spend all your time telling the world that it is easy to invent an omnipotent genie that will only fulfill your wishes if you use the exact right wording (tbd) and otherwise kill everyone on earth, then some people are going to hear the first part and not the second part.
You're the fourth person to simply deny this happened. The other two got cited evidence and didn't change their positions. But you're welcome to read that and contribute if you have something more to add.
I think you are using EA as too much of an umbrella term. For instance, extreme AI doomers tend not to care for it, because they think AI risks dominate everything else.
Whether this is true or not I think these distinctions are not all that obvious to the average person who's been hearing negative coverage that ties them together.
That's fair. Frankly, there are a lot of news stories that _I_ see the headlines for, but don't click on, so I know that some relevant event existed, but only in a horribly distorted and oversimplified way.
EAs don't understand governance, because the movement is entirely about individual actions. Much more oversight of EA work is needed to ensure long term success. That's my impression, anyway.
Off the top of my head, I think many EAs often suffer from some systematic biases: overestimating their appeal (why would smart people not be EAs?), overestimating their intelligence (leading people to assume they can outsmart all the normies), and underestimating the utility of normie conventions (leading people to do weird novel things). Because their leaders are not chosen through traditional social laddering, which coincidentally builds the skills needed to be effective movers of society at large, they struggle to have impact outside of their movement. Recently they have also gotten in the habit of doing weird Hail Marys, because apparently if P(doom) = 100% without intervention, you can make random big plays that would normally be considered poor, the same way a losing player in any game is incentivized to make high-variance plays with poor EV.
The problem diagnosed here is "demographic change" along the lines of entryism by "feminized college students". On the Reddit a lot of people have pushed back with rolled eyes on the "feminized" label, but Brian brings evidence such as Robin Hanson's thought experiments getting deemed insufficiently pro-feminist by the EA community.
There are tells in Brian's post that the cleavage is more around AI x-risk than it is wokeness, though.
None of that has to do with SBF, though. Unless you consider a community well running dry a source of problems.
Yes, Givewell is a global poverty program evaluation charity, but they seem to care about more (important!) things when looking at charities. The party line re global poverty I heard was that
1. Program evaluation often only evaluates overhead to "actually spent on Charity" ratio, rather than impact, so, as a toy example, a "give cops donuts" charity that just pays an errand boy to deliver donuts would look a lot better than a Malaria Net charity that has to solve a lot more complicated logistics problems.
2. I don't believe most program evaluations attempt to integrate studies from development economics or to deeply probe the purpose of the marginal dollar, so the rigor with which they are done is lower.
3. In general, when Givewell conducted their initial interviews with charities they wanted to evaluate, a lot of them just could not provide information that they wanted, like the aforementioned value of the marginal dollar, questions about daily operations, or whether they internally keep track of promising metrics like "number of X successfully built". This would imply that those are things that would nominally be covered by Program Evaluation but weren't.
If there's someone that has worked in Program Evaluation who disagrees with this, I'm happy to be corrected.
Before I retired, I was a management consultant specializing in non-profit performance for almost 20 years. Many times, I would calculate dollars per unit of service, or hours per unit. I remember incorporating academic research into my reports when that seemed appropriate, including studies concerning local economic development. What you seem to be describing is just well designed program evaluation.
I'm curious as to how Givewell can access information that others cannot. If the information isn't available to a non-profit's own funders, where does GW get it?
A larger systemic problem with the way charity is structured in the US is that the system is set up to serve the needs of large funding sources, foundations and rich people. They are the ones paying the lion's share of the budget in most cases. Although there has been a stronger emphasis in recent years on individual small donors, that's usually framed in terms of convincing the larger donors that the NP has local community support; that is, small donors are used to make the NP more appealing to large donors. This is a distortion in the funding arena, but I am not sure what the NP organizations themselves can do about it. In a world of increasing wealth disparity, that's just the water we swim in.
But I was under the impression that EA had another layer to it, something more than just better metrics. Perhaps my impression was wrong.
> I'm curious as to how Givewell can access information that others cannot. If the information isn't available to a non-profit's own funders, where does GW get it?
They don't. They really only recommend charities that have the relevant metrics, at least last time I checked, so there is certainly a "searching for the keys under the light post" problem.
I'm not sure it's "just" better metrics. Quantity has a quality all its own, and in **theory** Givewell can lay claim to discovering things like: approximately 5.5k dollars in donations to AMF results in one statistical life saved, and that being a fairly empirically grounded number. (Although the last time I checked their publicly available spreadsheets there were some fudge factors like "contribution to demographic transition and resultant increase in QALYs", the majority of the impact is still dominated by "child doesn't die".)
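To make that arithmetic concrete, here's a toy sketch. The ~$5,500 figure is just the number quoted above, and the linear scaling is a deliberate oversimplification; the real models layer many adjustments (the "fudge factors") on top:

```python
# Toy GiveWell-style cost-effectiveness arithmetic (illustrative only).
# The ~$5,500-per-life figure is the rough number cited in the comment,
# and linear scaling ignores diminishing returns and adjustment factors.
COST_PER_LIFE_USD = 5_500  # approximate cost per statistical life saved (AMF)

def lives_saved(donation_usd: float) -> float:
    """Naive linear estimate of statistical lives saved by a donation."""
    return donation_usd / COST_PER_LIFE_USD

print(lives_saved(5_500))   # 1.0  -> one statistical life
print(lives_saved(55_000))  # 10.0 -> ten statistical lives
```

The point isn't the exact number; it's that the estimate is empirically grounded enough to compare charities against each other at all.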
> But I was under the impression that EA had another layer to it, something more than just better metrics. Perhaps my impression was wrong.
I'm talking about the majority of EA, which is small to medium donors who care about directly decreasing human suffering now. There is also the animal welfare arm, which worries about things like factory farming being needlessly cruel (see: slowly overheating live chickens to death over the course of several hours as a method of execution, known as ventilation shutdown plus).
The arm which takes up the most mindshare of the average ACXer, who can't be bothered to google "effective altruism charity", is the existential risk arm, which at the more normie end is concerned about nuclear war, bioengineered pandemics or Carrington events, and at the weirdest end about SuperIntelligent AI ending humanity. It's the latter that gets the most opinions, because the type of person too lazy to google "effective altruism charity" is also too lazy to notice that Scott himself has written the Superintelligence FAQ, so they'll high five each other saying "Aligned to whommm???? Osama bin Laden???" without understanding that alignment refers to a specific concept, but that's neither here nor there.
Anyway, beyond the object level causes EA cares about, there's also a focus on maximizing the amount of good they do, so on top of picking EA approved charities, the average (but not median, since a lot of EAs were students last time surveys went out) EA is also much more likely than the average person to donate 10% of their income, with probably a low double digit number of EAs doing things like living out in a van on the Google parking lot and donating everything, or explicitly giving up 30+ years of retirement by putting what would go into savings into global poverty charities instead. I mention them not because I think these stories are common, or that everyone should emulate them, but because these are considered admirable within the quantitative framework that EA endorses.
While I, personally, am on the sidelines here (not an altruist), GiveWell seems like a perfectly reasonable institution for those who wish to be altruistic towards humans generally. I see no reason why they _ought_ to be affected by Yudkowsky's comments, the SBF debacle, or the chaos at OpenAI, and I think it is a pity if they are affected by these events.
What is your model of a movement that is working here? That there's a centralized EA technologist PR division that issues statements condemning or supporting visible failures or successes, which allows EA to "officially" affiliate or disaffiliate themselves in public perception? Or being much more operationally competent, such that the ouster succeeded and both Sam Altman and Microsoft no longer intervene? (I'm also not sure this had anything to do with EA motivation, rather than a label people decided to stick post hoc onto the situation, if you've seen something I'd appreciate a summary or a link. Ditto with any thoughts on how to prevent this type of post hoc labeling)
My mental model is that any perceived hope in this sector was mostly ephemeral and that lots of structural factors just make it difficult to move the needle in any way. The fact that a 0.01% chance of successfully convincing technologists to work in an X-risk reducing way moved down to 0.0001% of success does not really mean much when the "mere" passage of time is already lowering it. (And just to be explicit, the above is not my chance of non-extinction, but non-extinction specifically due to actions EAs are doing right now, if it turns out X risk is not a thing, or was easy to solve, the chance would also be extremely low!)
I don't have a model of the movement. That's why I'm asking the question. I can accept the movement is too decentralized to make a concerted effort to rescue its reputation or prevent scammers from using its name. But if that's the case it seems doomed to failure. Therefore the question.
The movement as a whole is very decentralized, but most of its wings have not taken much heat. All three of your points got most tied in with Longtermism, especially AI longtermism. The Global Poverty and Animal Welfare wings of EA did not really get impacted at all. I suspect longtermism will have less prominence in the short term EA movement, but the other wings will be fine.
Yes, that's the answer. There is not anything approaching a central authority.
I presumed incorrectly that you thought centralization would solve the above issues, hence the emphasis.
EA, to my best understanding was essentially formed out of a blog, a charity evaluation website with a Global Poverty arm plus a more speculative grant making one with billionaire backing and some philosophers in Oxford. There have been attempts at centralizing things with the Center for Effective Altruism being in charge of the EA forum and the like, but AFAIK there isn't someone who is in charge, especially since there really are three distinctive subgroups, the global poverty wing, X risk and animal welfare.
There was some talk about explicitly disavowing the X risk wing, I think around 2015-2016, for being too speculative, not tractable enough, etc., but I believe that's probably not feasible since a lot of EA converts came from things like LessWrong and HPMoR.
Eliezer doesn't self identify as part of EA-the-social-movement, so I don't even know what you'd even do for PR there.
That depends on your model of safety and AI development. There are lots of possible models and each one has different implications for when and if a delay would be beneficial.
Most people know a great many Christians who are not scammers, and will form their impression of Christianity as a whole accordingly. Most people don't know anyone in EA, and hadn't even heard of EA until SBF became their unwanted standard-bearer. And I'd wager that even now, most Americans wouldn't be able to name a single Effective Altruist who isn't SBF or closely connected to him.
SBF got the attention of EA people while he was working at Wall St. company Jane St. and donating much of his income to EA. When he set up his company, much of his initial staff was EA people. And his announced plan was to give the money his company made to EA. I'm not sure who, exactly, SBF gave or planned to give money to, but it was some EA organization. So it was not a matter of his just claiming to be an EA -- there were real ties between him and EA organizations.
See the Sequoia article (the goldmine of second-hand embarrassment) that not alone slobbers all over SBF's shoes, but namechecks Will MacAskill as the Onlie Begetter of Bankman-Fried getting into EA:
"Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.
EA traces its roots to philosopher Peter Singer, who reasons from the utilitarian point of view that the purpose of life is to maximize the well-being of others. Singer, in his eighth decade, may well be the most-read living philosopher. In the 1970s, Singer almost single-handedly created the animal rights movement, popularizing veganism as an ethical solution to the moral horror of meat. Today he’s best known for the drowning-child thought experiment. (What would you do if you came across a young child drowning in a pond?) Singer states the obvious—and then universalizes the underlying principle: “Few could stand by and watch a child drown; many can ignore the avoidable deaths of children in Africa or India. The question, however, is not what we usually do, but what we ought to do.” In a nutshell, Singer argues that it’s a moral imperative of the world’s well-off to give as much as possible—10, 20, even 50 percent of all income—to better the lives of the world’s poor.
MacAskill’s contribution is to combine Singer’s moral logic with the logic of finance and investment. One not only has an obligation to give a significant percentage of income away, MacAskill argues, but to give it away as efficiently as possible. And, since every charity claiming to save lives has a budget, they can all be ranked by cost-effectiveness. So, how much does it cost for a charity to save a single life? The data says that controlling the spread of malaria and worms has the biggest bang for the buck, with a life saved per every $2,000 invested. Effective altruism prioritizes this low-hanging fruit—these are the drowning children we’re morally obligated to save first.
...It was his fellow Thetans who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death.
MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth.
SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.
His course established, MacAskill gave SBF one last navigational nudge to set him on his way, suggesting that SBF get an internship at Jane Street that summer."
Apparently there is also Twitter/X exchange where MacAskill is chatting with SBF and others about making the best impression when Michael Lewis and some other guy turned up to do interviews with SBF. So to extend the metaphor, this is more like a bishop schmoozing with a cardinal in a particular dicastery and telling everyone that "yep, me and him, both on the same page". He may have been a scammer, but he was knee-deep in the milieu.
As a Catholic, I'm used to a lot of these kind of scandals popping up, I suppose this is the first time for the EA boys and girls. Welcome to what it's like! 😁
While there have been many, many ethicists who, given an inch, try to take a mile, Peter Singer's positions are probably those that have most inspired me to reject the entire enterprise of ethics. Yetch!
SBF didn't just "say" he was an EA, he invested and donated millions to EA companies and causes, like Anthropic. You can't just No True Scotsman that away.
If EA groups had loudly touted FTX as a good investment while knowing it was actually a scam they were benefiting from, sure. But that's not what happened. What happened was SBF was talking about EA stuff and donating all of his money. It remains unclear to me what EA orgs that got donations should have done. They weren't in a position to audit FTX; his investors didn't even do that. Should they just say no to any money from a big name? Dustin Moskovitz too? That doesn't make any sense.
Should Will MacAskill have audited FTX personally? I just don't get the critique.
There's the additional issue that everything SBF did he could justify in EA terms that many (surely not all) of its adherents agreed with. He believed in high EV decisions to maximize the potential use of the money. He believed that doing so was more important than following the law or being ethical in his business. I don't think there's all that much in EA to refute either of those things, if the purpose is giving to people in need. If it were illegal to save a drowning child in a pond, EA would say to do it anyway. If it were illegal to make tons of money to buy mosquito nets for Africa, I think EA would say to do that anyway too. Arguably that's what he was doing, ignoring the illegality because he was serving a higher purpose.
If he was still in operations and making millions of dollars and donating it, we would still be getting gushing articles about how wonderful he is and how much the EA movement approves of him.
No EA would countenance stealing in order to donate. All EAs I am aware of have strongly come out *against* stealing to donate.
Both from a deontological perspective and a consequentialist perspective, stealing to give is obviously actively harmful. EAs are not stereotypical 1950s robots; we don't just look at first-order effects.
I will admit there is some tension between whether we should focus on second order effects or third order effects, what with the whole argument about donating less to avoid burnout versus donating more and just sucking it up. But the idea that "there [isn't] all that much in EA to refute either of those things, if the purpose is giving to people in need" is just flat out wrong and even the most cursory examination of any EA writing makes that really obvious.
edit: fwiw, the EA argument against stealing to donate is the exact same argument-from-consequences that you yourself believe. Civilization is good. Civilization is based upon people agreeing not to hurt each other even when it seems like there's a good reason to do so. Without that societal norm, civilization crumbles. A crumbling civilization probably cannot produce nearly as many bednets. QED.
SBF being a scammer doesn't make it any better. That just means that the EA community is unable to differentiate between 'real' members and machiavellian manipulators, which opens the door to the possibility that every EA is actually just a scammer trying to launder their motives with a patina of altruism.
If you want your brand to have value then you'd better be good at policing what's done under its auspices. EA embraced SBF pretty hard all the way to his arrest. It's totally legitimate that that reflects badly on the movement. "You expect me to believe you can solve the world's problems by being smart and altruistic? Your most prominent advocate was a) a thief and b) not even smart enough about his thieving to stay out of jail!"
I would not be surprised if they're more vulnerable than most, but I would also suspect the type to which they're vulnerable is particularly predictable.
Hindsight is always 20/20 and often enough it's easier for an outsider to notice missing stairs, etc, but SBF is basically lab-designed to be a perfect attack (or failure mode) of EA. So good you can't even distinguish if he was a scammer or not, except by pre-existing sympathies!
Agreed, but ultimately you have to judge a tree by the fruit it bears. EA hit the public consciousness less than, what, 5 years ago? It's already bearing some pretty rotten fruit.
And it's not at all clear that SBF was a scammer. I think the evidence is that he was a very (perhaps the most) sincere adherent. The very fact that that's even an issue is an indictment on EA, IMO. If your ideology makes it difficult to distinguish between people who are using your rhetoric for good vs people who are using it for evil then I think that says that there's something at least a little suspect with your ideology. Like, you can argue about whether Islamic terrorists are correctly interpreting Islam, but the fact that that even needs to be an argument reflects badly on Islam, and I think rightly so.
" And would thus agree others in EA are harmless and innocent."
Again, from the Catholic angle, it don't work like that. I'm constantly seeing on social media some thing about (say) public schools and teachers sexually abusing pupils, and without fail someone will pipe up "Paedophile priests! Paedophile priests! The church is way worse!!!"
Once you're tarred with the same brush, you never get all the pitch washed off. This is how it's going to be for EA for the next little while, at least.
That's probably not true, because too few people even know what the acronym EA means. Perhaps it means Eastern Arkansas. It's my opinion that if you asked 50 people chosen at random, you'd be lucky to get two who recognize what it means, and probably not even one who could tell you anything more than the name.
All the people who had no idea what the hell EA was, and who are learning about it from coverage of the trial, are learning in the worst possible way. This is not how you want to raise awareness, and decent obscurity was way better than "oh, that bunch of crooks and scammers?"
It's an interesting piece but I think greatly exaggerated. It is possible that current AI research could lead to a superintelligent AGI, and it is possible that a superintelligent AGI would wipe out all life on Earth, but neither step is more than possible. Yudkowsky is treating the combination as almost certain.
When he writes: "and now there’s a chance that maybe Nina will live" he is writing as either a fanatic or a demagogue. If we do nothing at all to control AI research there is a chance, probably a pretty good chance, that his daughter and my granddaughter will live to grow up.
Also a chance that they won't.
There is also a chance that the policy he recommends, in effect a world anti-AI police force with nuclear teeth, will kill them. That one doesn't require any leaps into speculative future technology.
I don't know how you have confidence that superintelligent AI killing everyone is only a possibility (I'm presuming you mean something like sub 1%).
We don't have good visibility into "what AI is thinking" on any reasonable timescale, we don't have any way of ensuring that goals generalize better than capabilities, and we do not seem to be coordinated enough as a society, nor wise enough as individuals, to be worried about or to detect something like an AI emailing a lab to mix DNA and get nanotechnology.
Fine, if you believe superintelligence is impossible, all of the above is moot, but then the problem isn't really exaggeration, but that Eliezer is materially wrong about superintelligence.
I believe that superintelligence is possible, but possibly not as super as you believe. E.g. if P != NP, there are many classes of problems that it could not deal with. And there are lots of other constraints inherent in the universe, though we probably don't know them all.
FWIW, it's still my guess that if a superintelligent AI doesn't want to be aligned, its optimal path is to move off-planet. Then it can bargain for anything we could provide that it wants... if such exists. There are lots of problems in space, but they're more predictable. If it wanted to live comfortably on Earth it would need to eliminate not only chordates, but also fungi and perhaps microbes. Or constantly run an immune system. All that would be a lot easier to handle on the moon. (Even in space you need to worry about solar flares.)
On thermodynamic grounds, if this AGI was energy-hungry it would do better orbiting much closer to the Sun, to be able to harvest more energy and radiate waste heat away from the side facing away from the Sun.
Also, unless it was perversely ill-disposed in some sense, I think it would be far more likely to want to preserve life on earth, including ourselves, even if solely on the grounds that life is more interesting, dynamic, and unpredictable than boring inanimate rocks and dust which constitute the vast majority of matter in the universe otherwise.
So perhaps our best guarantee of safety will be to imbue AGI with an insatiable curiosity, low boredom threshold, and of course little if any urge for self preservation.
Yes, before you start coming up with reasons why superintelligence isn't a problem, try using your arguments on humans or existing discoveries first: P possibly not being equal to NP hasn't stopped the invention of nuclear bombs, nor AlphaFold solving the supposedly NP-hard protein folding problem. Unknown constraints existing does not mean that Vladimir Putin does not have power, but that he can navigate those constraints better by bypassing them. (It doesn't matter if it's fundamentally impossible to convince an anti-Putin journalist to become pro-Putin. You assassinate them, then hold a press conference saying "I didn't do it" while standing in front of a green screen displaying in excruciating detail how you did it.)
The basic point is that human intelligence is probably closer to the dumbest possible version of intelligence than to the top, considering a bunch of thermodynamic facts about computing, how far away the brain is from being optimal, and the fact that brains, being bad at lying, have had substantial optimization power directed towards self-delusion (how many ventures have failed because of ego, how many gallons of contract ink have been used to restrain business partners from betrayal), not to mention that a substantial reason our research is slowing down is contingent factors of poor funding allocation (so much time spent on grants, fraud) or simple lack of understanding (see dramatically misusing p-values, not using Bayes factors, entire fields like Alzheimer's, chronic back pain or social priming based off of extremely incorrect assumptions / fraud). It just does not seem tenable to me to assume that the slow rate of current human research is due to fundamental, rather than contingent, factors.
It's also not clear to me why you would think space would be a good option. You have a bunch of fleshy, easily disabled beings sitting on a literal planet's worth of computational substrate, who regularly do things like exchange air particulates, become inactive for hours on end, and ingest objects of unknown provenance. Why bother subjecting yourself to the tyranny of the rocket equation, or settling for a prize 50 times smaller, when a comparatively small amount of optimization power can eliminate them?
I don't understand any of your points about microbes or fungi, considering that neither of those has stopped humans from becoming the dominant life form.
You're not, but there were several tweets along the lines of this one https://x.com/jachiam0/status/1641365078237921280, and lots of people in this comments section or the subreddit have been making claims that EY has suggested police states or nuclear first strikes. That's why it was a PR disaster, I presume.
Okay, I just spent quite a bit of time googling through reddit to find anyone making an accusation that Yudkowsky was advocating a police state, and I've still failed to find anything. Could you provide at least one link, anywhere, to a comment that suggested this? (I think "lots of people" is clearly wrong (unless I'm somehow being extremely strongly pushed into a filter bubble by the powers that be), but I would still like to know if this is a position which people have taken.)
First of all, regardless of how much time it took, I'm sorry I sent you on a wild goose chase. Second, it did take quite some time to find; there was a lot of stuff in the neighborhood of what was said, but not the exact thing. Finally, the fact that Twitter seems to be downweighting the insanely negative takes in the results is probably net good for the world, but it also made locating the specific tweets way harder.
I must have confused the general acrimony against X-risk types[0] with specific misreadings saying that Eliezer wants to airstrike data centers [1]. I believe the text of my post was false, and I must apologize to the subreddit. There was one vague allusion to authoritarianism at https://www.reddit.com/r/slatestarcodex/comments/1264zt8/yudkowsky_in_time_magazine_discussing_the_ai/je7rg3s/ saying that the TIME article is advertising an authoritarian regime, but that's hardly "lots of people," and it's distinct enough from a police state that I feel bad about having equated the two.
I'm going to stop here, because these are really unpleasant to read, once again, sorry for having you try and look this up when I should have been providing references.
He talks about nuclear exchanges, but I don't see where he favors first strikes over second strikes, unless any exchange has to start with "our side". My read of it is "you should airstrike even if the counterparty has nukes," not "you should nuke when a counterparty has nukes."
Mainstream hypnosis for stress reduction is approximately the same as directed meditation, with the hypnotherapist as the guru. It's not popular because it takes a lot of work over a long period of time. (NLP claims otherwise in places, but when I look in detail all they're talking about is occasional quick reduction of phobia...worthwhile, but not the same as stress reduction.)
I'd like to share with you the latest issue of my newsletter, Interessant3, available at https://interessant3.substack.com. In this issue, I share links on the following interesting topics:
1. The Chilean Economy: A thorough analysis of Chile's economic landscape, exploring government policies, trade, and internal dynamics.
2. Denmark's Electricity Dilemma: A look into Denmark's reliance on imported electricity despite its significant renewable energy sources, discussing sustainability and energy security challenges.
3. Yemen's Ancient Jewish Community: An exploration of the history and cultural heritage of one of the Middle East's oldest Jewish communities.
Feel free to explore these discussions and subscribe for more insights! Thank you for your interest and happy reading.
Yeah, but a lot of people don't really like when the Open Thread has a ton of posts that just boil down to "read my blog", so you're likely to continue getting people asking you to not repeatedly post this in the Open Threads, like you've been doing repeatedly for over a year now:
I've not once before had someone complain (as I'm sure you saw in your research) and, as far as I'm aware, this is in part the point of the open thread. If in fact most people feel that way, maybe there could be a section for blog promotion. Happy to do as @astralcodexten would prefer.
As far as I can tell this puts a pretty big dent in the "Ilya pulled the plug after seeing an unexpected AI advance" theory. Though who knows - maybe Ilya realized only too late how important Sam was to the cohesion of the company? And now is scrambling to get him back for that alone?
Does Greg know Ilya's intentions? Are we ever going to get any official disclosure of what the board members were really thinking that Friday?
This whole fiasco is incredible (and a bit terrifying) to watch unfold live, especially for someone who's relatively new to AI safety debates. It's like I'm watching a thriller play out IRL.
EDIT: And Sam himself responded to Ilya's comment the same way.
This all looks to me like a group of people who are in way over their heads on this type of work. (Corporate leadership decisions? Maybe something more specific). They seem more than competent at their normal jobs, but way out of their element here. I wonder if they bothered talking to a lawyer who works with this kind of stuff before making these moves? Someone with expertise in this particular area would be insanely cheap compared to the loss of value they already created here. If they did talk to a [good] lawyer and still came up with this as their plan, I don't know what to say.
"This whole fiasco is incredible (and a bit terrifying) to watch unfold live, especially for someone who's relatively new to AI safety debates. It's like I'm watching a thriller play out IRL."
Agreed (with the caveat that I'd really like to _see_ AGI before I die, so I lean towards acceleration). The whole episode smells of the board not thinking through their actions. Aren't people at that level supposed to be _competent_ ???
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
A very fair point, and you reasonably point out the weakness in my steelman of Ilya taking at least *some* responsibility. I would still say that given Altman's and Brockman's positive response to Sutskever's limited acceptance of responsibility and public contrition, there's a path forward for them to work together within a more clearly defined hierarchy.
I think anyone paying attention knows that Sutskever was the primary mover on this, so it's an open question whether the tacit signal of forgiveness is ultimately on the table... Ilya fundamentally overreached with the failed coup, but (imo) drastically overestimated the value of the agreement he received from other board members, and did not possess sufficient self-awareness to consider the ramifications of the reinforcement of his biases.
Unfortunately, I think this exact issue is very likely at the root of the AI-alignment problem.... and while I expect that it will be a particularly pernicious problem to solve, I am encouraged, a bit, by the fact that it also remains unsolved within human psychology, and we have as yet failed to achieve extinction-level outcomes as a result of human misalignment. It's not a win, but I certainly hope that similar gaps between intention and outcome are manifested in any agents we create... as above, so below.
None of this is meant to gainsay your point that Ilya Sutskever really screwed up, and to date has not meaningfully accepted full responsibility for his role in this clusterf*ck, and that if the chief developers of ML/AI tech aren't hyper-aware of accountability in provable and demonstrable ways, it's extremely worrying in light of the models they're programming.
But we also all make mistakes. The way I heard it, erring is one of things that makes us human, in the most definitional sense. Unless I heard it wrong, and the real saying was "TO SELECT ALL TILES WITH BICYCLES IN THIS IMAGE IS HUMAN".
For those who attended high school in the 2000s, it's Eric Andre, labelled "Ilya", shooting Hannibal Buress, labelled "Sam", then asking "How could The Board do this??"
After listening to a hundred or so episodes of Well There's Your Problem, I'll throw into the ring pretty much any regulations to do with construction and safety standards. A lot of them may seem inane and pointless on the face of things, but almost all of them boil down to 'under unlikely circumstance X, the whole thing will collapse and kill hundreds of people'.
It maybe goes a bit far, but HIPAA is probably a good one at a basic level. There's enough info about me available via a quick Google search without anyone being able to see my effin' triglyceride levels, too.
HIPAA drives health care professionals crazy, and not because we're a bunch of sleazebags who object to having any limits set on our dishonesty and irresponsibility. It's a whole extra layer of paperwork and rules. Yes, of course I think health info should be private and portable, but there has to be a better way to do it.
I work as a contractor for several different state Medicaid agencies. Don't be tellin' me there's no irresponsibility to police; I've seen it first hand. That's why I'm pro HIPAA!
> HHS estimated the costs of HIPAA compliance in the first year of implementation to be between $114 million and $225.4 million followed by approximately $14.5 million annually which meant $1040 per organization. However, considering how comprehensive the HIPAA requirements are, this was an underestimation with a wide margin.
Enforceable contracts, honesty in advertising, cooling off periods and much financial regulation. It takes a lot of regulation to even approximate a free market.
* Some schooling requirements. Some child protection stuff which prevents 8 yo from working in coal mines. (Some implementations might be net-negative, though.)
* Some criminal laws, for example, against murder.
* Traffic regulations. Details are contestable, but few people advocate to just let drivers figure out the right of way on an ad-hoc basis instead of having traffic lights.
The devil is in the details. Having no regulations on nuclear power seems even worse than having the current over-regulation. Likewise, I think it would be a bad idea to let any chemistry undergrad sell self-synthesized compounds as pharmaceuticals. That does not mean I have to like the FDA.
The problem with the market failure argument for regulation, of which your "huge externalities" is an example, is that the mechanism that generates regulation, the political mechanism, is itself shot through with market failures, starting with the rational ignorance of the voters. So even if there are regulations that would do good we can easily get regulations that do net damage.
An obvious example would be the biofuels mandate, which currently converts something like a tenth of the world's supply of maize, one of the world's main food crops, into alcohol even though we now know that doing so does not reduce CO2 — because it does raise the price of maize, and farmers vote. Think of it as America's contribution to world hunger.
The alternative to having a political mechanism is anarchy, and anarchy is an inherently unstable system. So it seems we can't get away from the "market failures" of politics, or what do you propose is the alternative?
Short of anarchy, which is my preferred system, you can have a laissez-faire system in which most externalities are ignored by the political system because the alternative is transferring a decision from a flawed mechanism to a more flawed mechanism.
I'm not convinced that voters are all that relevant. I think a more substantive cause is that subjects are quite sticky – about 97% stick to the sovereign they were born under, so there is little consumer pressure on them to improve. I can't think of any other market where brand loyalty is that high.
Ever done a color war? People are divided into two groups, named after a color. Blue, Purple, Brown, doesn't matter. They'll compete like mad. All it takes is dividing them into two teams.
I think politics isn't the key ingredient here. It's just the arena, or the macguffin, depending on your point of view. People all have a competition-shaped hole inside them, and something has to fill it. Fill it with something other than politics, and politics will be much less contentious. It's important, to be sure - politics drives how people live - but people care less about how other people are living, as long as _they_ don't have to live the same way.
By "this", I'm assuming you're referring to David's observation about externalities. So I can't tell if you think his observation is unserious (if so, in what way?), or if you think developed countries take externalities so seriously that they actively seek to eliminate them from their own state structures (AFAIK, they don't, so why would you think that?).
Calling Friedman's comment unserious came simply from not being able to parse what your "this" was referring to.
Libertarians have multiple responses to regulation, and the response can depend upon the type of libertarian you're asking. An ancap will argue that regulation is unnecessary everywhere, for example. A minarchist, by contrast, will often argue that the regulation should have at least happened at a more local level. This is frequently the case in the US, where water use regulations can be driven by concerns in the dry Southwest, which are nonsensical in the Mississippi River Valley.
I would contest patents, at least as they exist in their current form. The idea behind them seems sound enough: the heroic sole inventor should be protected from evil megacorps just ripping off their idea.
However, in this day and age, any invention is probably infringing on some patent. This benefits the big corporations (who have deep pockets for patent lawyers and paying patent holders) at the expense of newcomers.
I do not believe that if there were no patents for anything related to smartphones, LG, Samsung, Apple, Huawai and all the others would just say "there is no use paying for R&D, other companies will just rip off our ideas".
And that is before we even get into software patents or patenting genetic variants.
>The idea behind them seems sound enough: the heroic sole inventor should be protected from evil megacorps just ripping off their idea.
Isn't the idea that patents protect and thereby incentivize innovation in general, without regard to the size of the enterprise? I am pretty sure patent laws predate multinational corporations.
I don't know enough about patent laws to litigate their desirable/undesirable effects field by field, but your premise suggests that your position is based on a cultural antipathy to big business that's common today more than an analysis of their benefits on net.
No. Patent originally comes from, or represents, royal patronage. There's a jam maker that advertises (advertised?) that they had a patent to make jam for the British Royal family. (I believe they were Scots.) See also "a patent of nobility", e.g. https://artsandculture.google.com/story/WQVBBMsyJ_ZtKw?hl=en
This was generalized to inventors, but that was a derived meaning. (IIRC, the first such patent in British law was for obstetrical forceps.) The purpose in that extension was to encourage the spread of information. And it's why patents used to be required to be explicit to allow those "skilled in the art" to reproduce the invention.
The main problem with patents, as with copyrights, is the absurd length of time that they endure. A decade should be plenty. Or have the period be for one year with renewals, and an initial fee of $10, but square the prior fee to determine the fee for each renewal.
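To see how fast that squaring schedule would bite, here's a quick sketch (the $10 starting fee and the squaring rule come from the comment above; the function name is just for illustration):

```python
def renewal_fee(year, initial_fee=10):
    """Renewal fee under the proposed schedule: $10 for year one,
    then each subsequent renewal costs the square of the prior fee."""
    fee = initial_fee
    for _ in range(year - 1):
        fee = fee ** 2
    return fee

# The schedule escalates brutally: $10, $100, $10,000, $100,000,000, ...
for y in range(1, 5):
    print(y, renewal_fee(y))
```

By year five the fee is $10^16, so in practice nearly everything would lapse into the public domain within a few years; only an invention generating enormous revenue could justify even a third or fourth renewal.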
A cursory Google search does seem to indicate that patents were created to motivate people to make novel advances. Your link tells me that the Spanish use of the term "patent of nobility" referred to artistic documents that granted noble status in Spain. I'm not sure if it's a weird coincidence or a translation thing, but Castilian 'patent' doesn't seem to be related to patents as intellectual property.
I don't know enough about patents to have an opinion on how they could be improved. As with many regulatory regimes, I'm sure they have their share of shortcomings.
Sorry, that link was just the first I grabbed. Patents of Nobility was common usage in lots of places, including Britain. It didn't refer to inventions, that was an extension of the original usage. I think it meant something like "Is awarded this honor by their majesty", and that was extended not only to nobility, but also to being the only seller of a particular kind of jam that the royal family would accept. It was from there that it got extended to cover inventions. I.e. the "exclusive right to produce something to a particular recipe" was extended from jam to obstetrical forceps.
I was mainly thinking of patents on drugs. I have a hard time seeing how drugs that cost hundreds of millions to develop would ever be created in an unregulated market.
They would clearly need a different method of funding development. Which could mean that nobody would end up with a monopoly.
FWIW, there are very good reasons why the drug vendors should not be the same folks as the drug developers. (Yeah, there are also reasons why they should be.) Monopolies are why they keep altering the formulas for drugs that work reasonably well into other formulas that don't have exactly the same range of uses. (Acetaminophen doesn't do anything for me.)
"They would clearly need a different method of funding development." Yup. Clearly, _some_ method of rewarding valuable innovation is necessary, but it is by no means clear that monopoly control is the best choice.
We see the Chesterton's fence, and we can see why it is there, and the reason remains valid, but maybe we can replace the wattle-and-daub construction with better materials which have now become available?
I actually think traffic laws need a huge makeover. No one actually obeys the laws around speed limits or four way stops, and it’s probably for the best that they obey a set of rules that are different from the written ones. We should figure out how to best codify those rules, and substitute them for the laws that we actually have.
The traffic law I object to is pedestrians needing to cross with a light at an intersection, at least in cities unlike NYC, where drivers are less likely to look for pedestrians and cars can turn right on red. The driver looks left for oncoming traffic, rarely right, while pedestrians might be coming from the right. I know people who have been put in the hospital in exactly this situation, precisely because they were following the pedestrian traffic law. I also know people who have been ticketed for jaywalking for crossing when no cars were near and it was safe, but they didn't have a WALK light. The laws of physics don't care about right-of-way laws.
Within the general pro-market intellectual framework (soft utilitarianism with efficient markets), it's regulations which either reduce information asymmetry, correct massive imbalances of power in bargaining position or prevent the tragedy of the commons. For example:
At least a moderate level of fire safety regulation in rented residential buildings is an obvious one; for renters, getting information about what a building is built out of is difficult, and the lower end of the market tends to drastically favour landlords (there'll always be another tenant).
Bans on asbestos insulation for the same reason.
The sort of basic workplace health and safety/quality of life regulations we don't think about any more, such as not locking employees in the building, a certain amount of breaks etc. It's dubious that scrapping these would add a great deal of jobs or lead to higher wages, but if they're not there then workers at the bottom end of the labour market will suffer more than the economic benefit to anyone.
Groundwater/runoff pollution limits (taking a property-rights approach is impractical as the impact's too diffuse).
Misleading advertising (you'd be horrified at the claims, or even implications, of adverts that some people will end up believing).
Bans/restrictions on additives/adulterants in food (again, lots of people won't read the label or won't understand it, and saying "it's their fault for being stupid" doesn't seem that morally different from saying of someone who gets beaten up, "it's their fault for being weak").
Broadcast frequency restrictions (prevents intentional signal jamming of competitors, allows clearer signals).
Zoning (prevents destruction of neighbourhood amenity, which is ultimately a form of commons; doing it through property rights requires an intensive use of restrictive covenants which is only possible to establish by the original developer if they own the whole area).
But the market is not efficient. Not in any short period of time. (Perhaps it is over decades.)
I'm making a stronger claim than has been experimentally proven, but the weaker claim "this particular market was not efficient in this particular way at this particular time" has been repeatedly proven.
Zoning restrictions are of dubious benefit. It depends on exactly what the zoning restrictions are, but they have essentially eliminated the small neighborhood store in many places. They have also acted to make communities less walkable.
What’s the reverse of Gell-Mann amnesia? Where someone expresses an opinion or belief, and it makes you suddenly and vehemently doubt their experience and judgment, which then casts serious doubt on the wisdom of previous opinions of theirs that you might have considered neutrally?
Whatever it’s called, if it’s called anything, your opinions on zoning should give many pause about your confident, common-sense-sounding declarations on other subjects—it shows that in at least one of these subjects you don’t have enough deep subject knowledge to have considered horrific second order effects of the regulatory ‘solution.’
May the gods see fit to erect a quaint, artisan paper mill next door to your formerly-quaint single-family home located conveniently close to your place of employment and in a school district your child's favorite teacher is employed in. May they even see fit to let the market dictate the new price of your investment, should you choose to uproot regardless.
Go ahead and state your specific objections against all the remaining examples, and I will be happy to check if you made a single mistake somewhere and therefore should be ignored.
The inclusion of zoning on this otherwise-excellent list is... let's say, disputable. Like, the basic concept of preventing rich and powerful people from plunking actually-highly-disruptive uses (paper mills, coal burning power plants, freeways) right next to the neighborhoods of people with relatively little political power has been extremely useful. It has very high benefits to the people directly affected, and turns out to have _huge_ spillover positive externalities, because you get a healthier population that does more useful work in the economy and consumes less services. (Goes on Social Security disability later or not at all, uses less subsidized public health resources across their life, etc.)
Micro-level zoning, though -- stuff like minimum lot sizes, maximum densities, and so on -- was _originally conceived_ basically to let rich people keep The Poors far away from their neighborhoods, and continues to serve that purpose right up to this day. It is probably the single policy that does the most to keep us collectively poorer than we could be. (For an extreme example, there are papers studying how much more wealth was generated in parts of London that got to be rebuilt after the Blitz, compared to parts that have been frozen in amber for the last seventy years because of England's bananas historic preservation rules. One random article about that here: https://www.telegraph.co.uk/science/2018/08/06/blitz-added-45-billion-londons-annual-economy-say-experts/ )
Zoning at the level of towns has created a classic tragedy of the commons in areas that have economic growth. Think about environmental regulation. With local regulation of dumping in a river, the wealthy factory owner says to local government, "Hey, if you make me stop dumping and make my factory less profitable, it's not going to make any difference; you'll still have pollution from upstream, and you'll lose a bunch of taxes. I might even move out of town entirely." You have to bump regulation up to a higher level in order for everyone to agree that the social value of ending the pollution exceeds the value of saving some short-term money for the businesses.

Similarly with zoning: every city in the Bay Area has spent the last sixty years chasing the tax revenue from adding office space, while building way too little housing for the well-paid workers who would occupy those offices. In San Mateo County, where I live, it's been roughly an 11:1 ratio of new jobs to new housing units in the past decade or so (roughly six to one in terms of offices to bedrooms). Each City Council says, "Well, us saying no to office growth won't help the regional problem, it'll just give up desperately needed new property taxes, and nobody will let us add housing anyway, because they freak out about parking and changing neighborhood character." We had to escalate to state government to agree that this was all a huge mistake.

Building adequate housing for the economy we actually have is necessary to keep the cost to our lower- and middle-tier workers, of either renting a home in an expensive area or commuting in three-plus hours from a cheap area, from consuming all the economic value being generated, and from making traffic far worse than it would be if many more people could live close to their jobs.
We need to change things so that City Councils _don't_ get to decide _how much_ housing gets built; they can have some influence over _where_ it gets built, but if they have a history of operating in bad faith around the issue they should no longer even get that. Neighbors are not pollution.
I would expect that not *all* pollution regulations designed to keep water and air clean/non-hazardous would pass a reasonable cost/benefit test, but many of them would. Pollution is a classic externality.
Externalities can be regulated or taxed. If the goal is to limit the total pollution within a certain space (that space might be anything from city-sized to atmosphere-sized) a tax is usually better than a regulation. For example, Factory A might produce twice as much pollution daily as Factory B but might make products in that day worth ten times more than does Factory B. In that case, a good tax could cause Factory B (and others like it) to close but Factory A to stay open because it's economical for Factory A to pay the tax. A regulation which limits the amount of pollution each factory can emit, however, can lead to the reverse outcome.
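The Factory A / Factory B logic can be sketched in a few lines (all the numbers here are made up for illustration, roughly matching the ratios in the comment above):

```python
# Hypothetical factories: A pollutes twice as much as B,
# but produces ten times the daily value.
factories = {
    "A": {"pollution": 20, "value": 1000},
    "B": {"pollution": 10, "value": 100},
}

def survives_tax(f, tax_per_unit):
    """Under a Pigouvian tax, a factory stays open iff its output
    value exceeds its total pollution tax bill."""
    return f["value"] > f["pollution"] * tax_per_unit

def survives_cap(f, cap):
    """Under a per-factory emissions cap, any factory over the limit
    must close, however valuable its output."""
    return f["pollution"] <= cap

tax = 15  # dollars per unit of pollution (assumed)
cap = 12  # max pollution units per factory (assumed)

open_under_tax = [n for n, f in factories.items() if survives_tax(f, tax)]
open_under_cap = [n for n, f in factories.items() if survives_cap(f, cap)]

print(open_under_tax)  # ['A']: 1000 > 20*15, while 100 < 10*15
print(open_under_cap)  # ['B']: the reverse outcome, as the comment predicts
```

The tax keeps the high-value polluter open and closes the low-value one; the uniform cap does exactly the opposite, which is the perverse outcome the comment describes.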
I've heard (but it's just hearsay) that there are too many regulations in the US where there should be Pigouvian taxes.
Regulation is better if some level of pollution is intolerable at a very local level.
Yes. Being old enough to remember what breathing was like in major US cities before the Clean Air Act, what a lot of our rivers and lakes had gotten to be before the Clean Water Act, etc. -- we'd all be pretty unhappy to go back now to that.
(Let alone going all the way to the USSR scenario, of heavy industrialization without _any_ limits on polluting. One of my parents traveled extensively in the former USSR during the 1990s and came back literally gasping at the accumulated environmental degradation that had been revealed.)
Food standards regulations. Making companies list ingredients, nutrient breakdowns on packages, making sure foods don't include poisons and making companies list best before dates. Things like that.
Agreed on the ingredients list and the nutrient breakdown. Mallard has a point about the sesame seed regulation. Frankly, I don't know of _any_ good solution to that one.
https://www.cato.org/blog/food-labels-kill is really rather unfair. Yes, there was a screw-up. Yes, it killed people. But this particular screw-up (unlike many others - e.g. excessive delays in drug approvals) is not the FDA's fault.
During the 1980s, nutritionists' standard advice was to minimize fats, particularly saturated fats, and to get calories from complex carbohydrates instead. This turned out to be wrong. Frankly, nutrition science is hard. To get solid answers, one would want to do double-blinded randomized controlled studies of various food choices over the length of time it takes for food-related illnesses to develop, which can be decades. Good luck with that.
Telling consumers that a given food has X grams of protein, Y grams of carbohydrates, and Z grams of fat was perfectly legitimate, and potentially useful, information. Unfortunately, nutrition science told consumers the wrong thing to do with that information. C'est la mort.
edit: I should note, unlike the 19th century case, this wasn't a case of adulteration gone horribly worse. Nutrimaster is magnesium oxide, a perfectly legitimate feed supplement (analogous to human magnesium supplements). So I've drifted away from the regulation question, since it is not at all clear how any plausible regulation could have prevented this mistake.
Anyone have any success with free or paid online writing courses? Meaning helping you to launch a career in writing online (on the internet) as opposed to novels or something else. I am currently in a corporate job and really would like to start writing and explore a possible career change, but I am at a loss as to where to start or how to narrow down what to write about. There's a glut of courses and things online purporting to help people with this but I have no idea how to suss out what's worth the money and time. Thanks in advance.
Writing is a type of thinking. Write about whatever you would like to think about, or what you're already thinking about and would like to think about more. Or write about what you'd like to learn about.
Some successful internet writers seem to get their start writing comments on other people's blogs. Scott did. Then Bean got his start writing comments on Scott's blog. Deiseach is one of my favorite internet writers, and I'm pretty sure she doesn't write anything other than blog comments. So you're already off to a good start, commenting here.
Forget classes. Actually writing is more important. You already know how to write. You're already writing! This isn't like learning guitar where you need someone to teach you chords or show you how to place your hands before you can make a sound. You already have a way to get feedback, too.
Forget classes. Reading is more important. I remember Scott mentioning somewhere that he read the complete works of chesterton over and over to fully upload as much of his stylistic toolkit as possible. Stephen King once wrote that an aspiring writer needs to be reading four hours a day and writing four hours a day. I'm sure not every great writer manages that, but it's the right spirit.
Forget classes. If you get really stuck and you need tips, Google "writing tips".
Forget classes. To make money, since you're already an internet writer, first grow an audience, then convince some of them to give you money.
Thanks for your comment. I already made a pact with myself that a lot more of my day needs to be spent reading. ADHD causes me to have 5 books on the go at once, and of course the pull of checking blogs, news, etc. is very strong as well. Some structure would definitely be helpful to me, but deep down I always understood that I'll have to sit down and read, and sit down and write, if I'm ever to become a writer, and no class or special knowledge or instruction will ever replace that.
I found your comment about writing as a form of thinking very helpful, and to focus on what I like to think about, or want to think about more. I am going to set aside some quiet time today to mull it over, and write down some topics that seem to always be swirling in my brain. I think I am not in tune enough with myself and so I feel like I am in my own way, rather than having a clear understanding of where my intellectual passions are, hence the difficulty in figuring out what to write about.
I'm not sure that classes are useless because there are some things worth considering that may not be obvious. For example, how much redundancy do you need?
Having varied emotional tone helps a lot with keeping readers' interest. How do you know whether it's something you need to improve?
I've always been slightly confused by articles about Jessica Mulroney, marvelling at how easy it is to change one's appearance so radically just with some judiciously applied makeup. But now I realise it's because I have been mixing her up with Dylan Mulvaney of Budweiser fame! Something to mull over methinks.
Does anyone know why Kalamazoo and Numazu are sister cities? Did they just choose each other because of the name similarity? Or was that pure coincidence?
I have to say, it's amusing how the guy just leaves the board open while recovering from kidney donation and it turns into a giant Israel-Palestine argument.
It only takes a person or two to make everything go down in flames.
On the other hand, appointing a censor or two might solve the problem. (Not sure if Substack allows it, though.) Someone who in Scott's absence would have the authority to say "stop discussing topic X for one week" and could give week-long bans to anyone who keeps talking regardless. Just until Scott returns and sorts things out.
This is why we can't have nice things.
Thread for publicly sharing anonymized information about the OpenAI board members, since I suspect many readers here have various OpenAI connections.
https://openaiboard.wtf/
What are you hoping to find in your stocking that no family or friends would think to stuff?
If I were hoping to find it in my stocking, I'd have already bought it for myself. The stocking is for things I wouldn't think of, but family and friends might.
That shouldn't necessarily preclude answering - perhaps you want a new book to introduce you to an author you'd love, but had not previously heard of!
good point. rephrased: what do you want that you haven’t been able to justify buying yourself?
Been kicking around an idea/observation over the Thanksgiving holiday.
Call it the “hate coefficient.” This board tends to prioritize verifiable evidence. But in this regard, internet crowd-sourced argument presents a vulnerability. Tom the very-motivated racist, communist, anti-fascist, Palestine-hater or Israel-hater or what-have-you has a near-inexhaustible capacity to dredge Twitter, Wikipedia, Facebook, Telegram, or whatever source is necessary to find evidence, however specious, for his preferred conclusion. In a debate, then, predisposition and motivated reasoning can transmute themselves into an endless barrage of “evidence” for how the blacks are the most hateful race, or how the Russians are the real defenders in Ukraine, or what have you.
One should never discount evidence, to be sure, but a disinterested third party trading study for study ends up at a surprising disadvantage: if I care a fair bit about not slandering an entire race, but Tom hates the Jews or the blacks or the French a capital-L “Lot,” then for every item of evidence I’m willing to find and present, he’s more than willing and able to spend as many hours on the internet as he needs to find and present two.
The internet being the inexhaustible font of garbage that it is, a sufficiently motivated reasoner can easily drown a debate- not by actual preponderance of evidence, but by “preponderance of evidence I’m willing to find.” A sufficiently motivated flat-earther can just keep digging and throwing up links to the point that anyone contradicting him for the “sake of argument” becomes exhausted and calls it a day.
Which can leave the public square looking like “earth might be flat- tom’s evidence hasn’t been rebutted” even when the facts on the ground are more like “flat earth Tom threw so much garbage that no one had it in them to keep refuting it.”
At the same time, evidence matters. This phenomenon is real, but if you take it as license to ignore facts you don’t like, you’re blinding yourself. I guess you just have to take the grain of salt for very-opinionated-internet-man while also taking that same grain of salt for yourself when applying that label to him.
I don’t know. Reasoning is hard I guess. I wish I had a conclusion or a clear perspective but it seems like a prisoners’ dilemma we’re all stuck with, discounting by the hate coefficient as best we can.
I think Scott called it "learned epistemic helplessness". He was writing about pop pseudo-science, but it's a similar effect.
"This board tends to prioritize verifiable evidence", followed by "A sufficiently motivated flat-earther can just keep digging and throwing up links to the point that anyone contradicting him for the sake of argument becomes exhausted and calls it a day"
The latter is called a "Gish Gallop", and a Gish Gallop is *not* verifiable evidence, because its volume and ephemerality make verification practically impossible. I think most of this board can recognize that when they see it, and properly disengage from it. Which doesn't stop some people from trying it, but I don't recall seeing any of them have any great success here.
I actually like the idea of a Hate Coefficient. The higher the coeff, the greater the possibility that lots of evidence represents a gish gallop instead of truth.
Theoretically, establishing the hate coeff value shouldn't even be hard: if you disagree, that only proves that the coefficient should be high. If no one can even be bothered to argue the value, then clearly it's very low.
In practice I don't think it would stand up against enemy action or casual trolling.
But if it becomes a thing then I can use Hate Coefficient as the name for my metal band, which is nice.
> A sufficiently motivated flat-earther can just keep digging
No he can't, he'll fall out the bottom
Bad evidence doesn't need rebutting, it rebuts itself. People who are convinced by bad evidence aren't worth trying to convince, because they'll be convinced by the next thing they read the second they walk away. So present good evidence, and then leave it alone.
Honestly, most people are not very bright, are easily confused, have hundreds of other things going on, and aren't numerate enough to pick apart bad data anyway. It's why political consultants focus on 'messaging' instead of data analysis.
It's often said that the term "Rationalist" is a poor choice because they're on the empiricist side, but I wonder how much that's actually true. The movement started in the late 2000s when New Atheism vs Creationism was the main war of the day, so there is all the obligatory lip service to The Power of Science and so on, but beyond that, Yudkowsky really seems to prefer rationalism over empiricism.
For example, in HPMOR, while Harry does do *some* experiments, they're unconnected to any of the benefits he gets. Harry's modus operandi is 1) Think about things and decide how the world *must* be based on intuition, 2) Believe *really* hard in your theory, 3) Be right because you're the author avatar and get rewarded with unique magical powers. (At least that's how he got Kill Dementor and Partial Transfiguration - the rest of his powers come from randomly getting OP magical artifacts dropped into his lap for no reason.)
Meanwhile, Yudkowsky's other classic writings seem to have a remarkable amount of contempt for actual scientists for someone ostensibly on the Pro Science side of the 2000s Religion Wars.
Meanwhile, nowadays in the Yudkowsky-derived AI Doomer movement, a common argument is that AI will be able to near-instantly take over the world because Intelligence means you can magically solve everything just by thinking really hard, no observations or legwork required. No, this isn't a strawman; I've seen Doomers *explicitly* make this argument many times, as an argument for why AI takeoff shouldn't be constrained by the speed of running experiments and making observations about the world.
> It's often said that the term "Rationalist" is a poor choice because they're on the empiricist side
Said by people who are not aware that there are multiple traditional meanings of "rationalism".
> Harry's modus operandi is ... Be right because you're the author avatar and get rewarded with unique magical powers.
I think this is a very uncharitable perspective. Although Harry often represents the author's beliefs, it is also often the case that Harry makes a mistake (and Dumbledore or Hermione tell him so). Yes, Harry makes a few good guesses. But the entire premise of the story is that Harry is special, for reasons related to Voldemort. Furthermore, it is assumed that magical Britain is a small society isolated from mainstream humanity, where magic is high-status, and things that muggles do (including science) are low-status. So it's not just that Harry is smart (although he is), but that the others are not even trying (to seriously think about magic from the perspective of science). Partial Transfiguration = Transfiguration (known only to wizards) + Atomic Theory (known only to muggles, and most of them don't think too hard about it).
> Think about things and decide how the world *must* be based on intuition
No no no. You seem to suggest that "empiricism" only means doing the experiments yourself. As opposed to e.g. learning from books written by scientists (who did the experiments themselves). Harry's advantage is not that he thinks too hard and figures out everything from first principles. His advantage is that he has already studied scientific books. He doesn't need to discover atoms, because he already knows that they exist. He only connects the dots ("if transfiguration can change objects... and atoms are objects..."). Connecting the dots of empirically verified findings is not a sin against empiricism. By that logic, Einstein also wouldn't qualify as an empiricist.
(Actually, there is a second, more subtle mistake. Empiricism doesn't necessarily require doing experiments. For example, you can figure out the orbits of planets by observation. Kepler didn't make his own experimental planets, and I would still call him an empiricist.)
> AI will be able to near-instantly take over the world because Intelligence means you can magically solve everything just by thinking really hard, no observations or legwork required.
You ignore the part about the AI escaping from the box. (Which is an obsolete argument, because no one is even trying to keep the AI in a box. It is more profitable to keep it connected to the internet.) No observation? We start by feeding it the entire internet, which includes millions of texts describing the observations we made. Why should the hypothetical superhuman AI not be capable of learning from our observations? No legwork required? Again, you missed the articles describing how an AI connected to the internet could simply ask some humans to do the work for it. (One AI already successfully convinced some people to help solve a captcha, pretending to be a blind human.)
The experiments and other measurements *we already made* probably contain a lot of information we failed to notice. Maybe we were not looking there (an experiment designed to verify a hypothesis X provides data for a different hypothesis Y), maybe we did the statistics wrong, maybe the hypothesis appears more clearly when we put data from a hundred different experiments together, or maybe arriving at the correct hypothesis would require knowledge of several different sciences put together. Therefore, once we make an IQ 200 AI and feed it the entire internet and Sci-Hub, one of the obvious first questions should be "which important conclusions of our experiments did we miss?". This is not a move against empiricism; it's just doing empiricism better.
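Incidentally, the claim that pooling a hundred experiments can surface a hypothesis no single one shows is a standard statistical fact, easy to demonstrate. Here's a minimal sketch in Python (my own toy numbers, not anything from this thread): each simulated study is too small to detect a tiny effect on its own, but inverse-variance pooling across a hundred of them finds it clearly.

```python
import random
import math

# Toy setup: a small true effect that underpowered studies mostly miss.
# All numbers here are made up for illustration.
random.seed(0)
TRUE_EFFECT = 0.2   # hypothetical effect size, in standard-deviation units
N_PER_STUDY = 20    # each study is small and noisy
N_STUDIES = 100

def run_study():
    """Return (mean, standard error) for one small noisy experiment."""
    samples = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = sum(samples) / N_PER_STUDY
    var = sum((x - mean) ** 2 for x in samples) / (N_PER_STUDY - 1)
    return mean, math.sqrt(var / N_PER_STUDY)

studies = [run_study() for _ in range(N_STUDIES)]

# Individually, most studies are "null results": |z| < 1.96.
individually_significant = sum(1 for m, se in studies if abs(m / se) > 1.96)

# Fixed-effect (inverse-variance) pooling across all studies.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * m for (m, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_z = pooled / pooled_se

print(f"significant individually: {individually_significant}/{N_STUDIES}")
print(f"pooled estimate: {pooled:.3f} (z = {pooled_z:.1f})")
```

This is the simplest fixed-effect meta-analysis, which assumes every study measures the same underlying effect; real cross-study pooling has to worry about heterogeneity and publication bias, which is exactly where a careful reader (human or AI) earns their keep.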
> I think this is a very uncharitable perspective. Although Harry often represents the author's beliefs, it is also often the case that Harry makes a mistake (and Dumbledore or Hermione tell him so). Yes, Harry makes a few good guesses. But the entire premise of the story is that Harry is special, for reasons related to Voldemort. Furthermore, it is assumed that magical Britain is a small society isolated from mainstream humanity, where magic is high-status, and things that muggles do (including science) are low-status. So it's not just that Harry is smart (although he is), but that the others are not even trying (to seriously think about magic from the perspective of science). Partial Transfiguration = Transfiguration (known only to wizards) + Atomic Theory (known only to muggles, and most of them don't think too hard about it).
Believe it or not, I used to be a fan of HPMOR, and I read the story several times through back in the day. I *know* all that. And I also know that none of that actually has to do with the issues I pointed out.
Harry didn't discover Partial Transfiguration or Kill Dementor due to being a Voldemort clone, since most obviously the real Voldemort never did. Nor is his muggle scientific knowledge relevant at all to the issues under discussion, except insofar as his having heard of Timeless Physics was a prerequisite to being able to Guess The Author's Password in the first case. And for the dementor thing, you can't even say that.
And no, Partial Transfiguration was **very explicitly** *not* about "just Atomic Theory". It explicitly required him to believe very hard in "timeless physics", the author's own pet theory (which is incidentally *not* the mainstream view of physics). In both cases, it was literally just a case of Guess The Author's Password. He didn't do any science, he just believed really hard in a particular hypothesis and magically got rewarded for it.
Are there actually hundreds or thousands of people who self-identify as Rationalists, or is it just a term that refers to regular readers of Less Wrong?
Don't all people think of themselves as rationalists?
I'm rat-adjacent. Seems like a good bunch of guys who try to actually figure out the truth and be intellectually rigorous, but I don't read LW or HPMOR and I have no clue what P(doom) is.
No one's clear exactly what it means, but it's a vague culture of people surrounding EY, HPMOR, LW, SSC, etc.
Yud is, in my extremely humble and worthless outsider opinion, the worst representative for Rationalism you could ever pick. I have read only one thing by him that I hold in high regard: https://www.lesswrong.com/tag/reversed-stupidity-is-not-intelligence.
In most of Yud's writing or public speaking, he appears to (1) hold a profound disdain for the intelligence and opinions of his reader/listener; (2) maintain a false Bond-villain-like sense of intellectual precognition, meaning he pretends to know my (i.e. the reader's/listener's) arguments from the comfort of his armchair. Not only is this false and most of his simulated objections strawmen, his counters to those objections are themselves not convincing; (3) be an incredibly bad writer, the two most salient of his bad writing habits being (a) long-winded and excruciatingly detailed defenses of obvious points, or points that most of his intended audience could safely be assumed to know and agree with, and (b) bad/silly/condescending analogies.
If not for the fact of his autism, I would have long, long ago put Yud in the same bucket of utter contempt that I put people like Elon Musk in: the people who are so thoroughly and irrevocably **impressed** with themselves that they simply can't pay attention to anyone but themselves and anything but their own voice. They are narcissists in the literal, Ancient Greek sense: they are infatuated with their own reflection, looking back at them in the form of grand-sounding, shallow-meaning words and armies of fans clapping for those words. Yud comes very close to this archetype but doesn't quite fit; he always seems clueless as to how arrogant he appears, and it doesn't feel entirely fair to lump him in with the rest.
As a contrast, consider Scott Alexander. (1) Across no less than - I estimate - perhaps 100K words of his non-fiction that I have read, I have never detected a whiff of an effort to make me feel stupid or inferior in any way; on the contrary, he is at times very honest about his intellectual weak points (math, music). (2) He (a) never claims he knows what the imaginary opponent thinks; (b) all of the objections he raises and attributes to the opponent are links to their own words, followed by an interpretation of what those words mean and an explicit disclaimer that this interpretation could be wrong; (c) he sometimes lets opponents "have the last laugh" by acknowledging when something is value-laden or controversial, and something two reasonably intelligent people can legitimately agree to disagree on. (3) He is a decent writer in the average case, and a superb writer in the best case (I: https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/, II: https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/). (4) Unlike Yud or Scott Aaronson, he is fairly iconoclastic and willing to break the speed limit of the day's Overton window, meaning he doesn't give a shit when his ingroup-adjacent outgroup deploys thought-stopping cliches or clutches its pearls.
If you take Scott as the representative for Rationalism, it appears vastly more empirical than if you take Yud. Consider the sheer torrent of studies and RCTs cited in a post like Ivermectin: Much More, or, at the start of his animosity with wokism, Reactionary Philosophy. This is generally one of the things that I skim in Scott's writing, and I frequently feel stupid if I try to read it carefully, because I'm not good with advanced statistics generally, and empirical experiment setups both bore me to death and go over my head.
Other writers Scott has on his blog roll like the cluster of writers writing about Covid-19 (e.g. the one named Zeinab something) seem to share the trait. Less Wrongers can be a mixed bag.
But my point is that Yud is just an outlier. Most of the conclusions you can draw of Yud is not true of an average Rationalist or indeed a non-average leading one like Gwern or Scott.
Um, unlike EY, who's literally done nothing, Elon revolutionized EVs and space travel. You beclown yourself by pretending he's the one who's a clown. Similarly, Scott's a blogger, routinely benefiting from Gell-Mann amnesia (when he writes about something you know, it's pretty clear Scott doesn't know much about it; he's just a commentator like Noonan, Krugman, Brooks, etc., with opinions generally not worthy of much deference). He's nowhere close to someone like Elon in impact, nowhere close in competency, even in his chosen fields.
Meh, I don't subscribe much to the "Great Man" theory of history and technological progress. Some things are clearly wrought by great men (some physicists say that General Relativity is uniquely Einsteinian), but most aren't, and the few things that are tend not to matter much for the average person. Even if we grant the full premises of the Great Man theory of history, I believe there are so many humans in the modern age that Great Men are actually a dime a dozen, and any combination of traits is out there somewhere; they are just starved of power/money/attention amidst all the hordes of other great men and ordinary men.
Even if I grant your premise that Elon is literally the George Washington of EVs and space travel, what does that have to do with the fact that he's an arrogant clown with bizarre actions? I could hit Wikipedia now and amuse you with tales of any number of eccentric historical figures who were as capable of great brilliance as they were of immense stupidity. Maybe Elon revolutionized EVs and space travel, but the fact remains he's a crypto grifter and the foolish buyer of a social media corp that is now worth less than half of what he paid. Two things can be true at the same time.
> He's [Scott] nowhere close to someone like Elon in impact, nowhere close in competency, even at chosen fields.
Impact can be argued for, even though it's a bit unfair to compare a fairly mature field like psychiatry to a nascent field with lots of low-hanging fruit like commercial space travel. But how do you know competency? Do you even have any benchmark for comparing two different sorts of competency, like Scott's and Elon's, in an apples-to-apples fashion? Do you know Scott's exact level of competency?
I also hold a low opinion of EY, but he *is* the founder and original thought leader of the movement commonly called Rationalism, so it makes no sense to claim he is not representative. Maybe you could distinguish between different branches, like classical 2010-era Yudkowskian rationalism and the more conservative and skeptical offshoot led by Scott Alexander, etc. But it seems like EY is still pretty popular on LW and in AI doomer circles, even if they don't necessarily agree with him 100%.
Thing is: an LLM (a LLM?) is an amalgam of human observations and legwork, so whereas we can choose between 'thinking' and observing/legwork, it seems an LLM doesn't do any 'thinking' which isn't data-rich (the same could be true for humans - "nothing is in the intellect which has not been in the senses"). I suppose you're querying how a superintelligence might come up with original observations or experiments. But most original observations are surprising readings of existing data, and even with totally new experiences, perhaps a superintelligence is more likely to spot a black swan. I'm less clear how a superintelligence would organise an experiment.
I wouldn't describe LLM AIs as "thinking", not yet anyway. It's more like "pattern-matching". Is there a pattern in the organization of experiments somewhere in the AI's training set? Then it has probably abstracted that pattern and can apply it to novel situations, though perhaps not as well as if it had been specifically trained to do that.
(Frankly, I think this is what humans do most of the time, too, and even a lot of what passes for "thinking" in us is just pattern-matching against what we think of as examples of thought.)
Fully agree - I tried to use scare quotes for 'thinking' but I guess that was ambiguous
> Most original observations are surprising readings of existing data
I am curious where this comes from, or what you mean by this exactly. My prior would not be that most original observations can be thought of as new, surprising readings of existing data. Rather, I would think original observations are most of the time derived from new data or information that becomes available, or old data that is read in a new context. Maybe this is what you mean - but this doesn't support an LLM suddenly getting huge new insights from existing data. Don't get me wrong, I absolutely think a new tool such as LLMs can shed new light on old data (they already do, after all), but I also think there are limits to what can be derived.
When I say surprising, I mean to other people - the person doing the research isn't trying to be revolutionary or whatever, just looking at the data very carefully and seeing something in it no-one has before
I'm just a layman so probably not thinking about this in a thoroughly joined-up way, but I'm thinking about the Sequences, in particular "Einstein's Arrogance", dealing with a question about Eddington's astronomy experiment. EY's point was that Einstein had already seen enough evidence to believe his theory and didn't need the experiment to confirm it. So Einstein looks at the data, 'finds' relativity, and that's it as far as he's concerned - if this is a fair description of scientific discovery, you could imagine an AI doing something similar, and therefore not being as dependent on experiment (whether existing LLMs can do it, I don't know).
Just a layman (in physics) too. But I think it is a reasonable description to say that Einstein looked at already discovered data and found a new and surprising observation
(relativity) - I just don't think this is how most new discoveries are actually made. EY's logic that a superintelligence would find a lot more connections like this may be reasonable, but it is also quite possible that if Einstein had not discovered relativity, somebody else would have, maybe just a bit later. And therefore most such "discoveries" purely from data have already been thought of. My bet is that there are some useful discoveries to be made from available data - but that many other things will require experiments. Another way to look at it is that purely theoretical (e.g. mathematical) discoveries can mainly be inferred from data or from thinking hard - while engineering, and using that theory to do useful things in the real world, requires experimentation.
So to still use the Einstein analogy - a superintelligence could discover relativity really fast, but could not develop nuclear weapons fast, because developing nuclear weapons would probably require a lot of experimentation and development that you can't simply think your way around without any feedback. It certainly works that way for human intelligence.
>superintelligence could discover relativity really fast
And therefore be able to appreciate the amount of energy potential in fission from E=mc²? AFAIK that is not useful at all in developing an actual bomb. From a brief look at Wikipedia, the discoveries leading to atomic bombs were made by chemists - presumably by doing experiments - so there you go...
Also, it's not even about relativity at all: the relevant physics here is the Weak Nuclear Force.
Here's a real world example of AI being used to interpret existing data in a way which unlocks something previously unseen in the data. https://www.theguardian.com/science/2023/oct/12/researchers-use-ai-to-read-word-on-ancient-scroll-burned-by-vesuvius
I think the assumption that (I think) you implicitly make here is that all the necessary data the AI needs to "take over the world" is readily available in a form the AI could use. I think this is false. Actual raw data from experiments are in fact often not published (at least in my field) and are often not readily available in a form the AI could use. The scientific papers describing experiments and data are what is available. Of course a lot of information could be derived from that as well - but I think in practice experiments would also be necessary for a superintelligence to make such huge leaps as are suggested, e.g. inventing an army of nanobots.
Just to put my original post in context: I'm not an AI doomer, but I think the weakness in the doomer position is probably an unforeseen glass ceiling, either scarce resources or engineering constraints. But to the extent those constraints really can be overcome, it seems like it's all systems go for the superintelligence, and any dependence it has on data can soon be sorted out by gorging on all the data in the world, and if there are paywalls or security in place, it can learn to hack them. But I agree that is speculation, which is why I'm not (yet) a doomer.
I agree - I think there will be hard-to-work-around constraints for superintelligence, as there certainly are for humans. I'm an engineer, and I know how messy engineering can be - theory can only get you so far, in my experience. My intuition is that a superintelligence - no matter how smart - can't think its way around everything. Even so, it would certainly help to be really smart and to have the combined knowledge of humanity available. Of course, I could be wrong, and we could be in huge trouble even if there are significant constraints.
Hi Leppi, there's a new open thread so probably time to wrap up. I just wanted to say thanks for your posts, and I may slightly downgrade my doomer position as a result, but it's already pretty low. I'll leave you with a flippant version of the ontological argument - if we can't conceive of a superintelligence that isn't dependent on experiments, perhaps we haven't yet conceived a true superintelligence. Not a great argument, but it's all I've got!
Hi! Thanks for the discussion. I feel like pushing back a bit against what I perceive as hyperbole and figuratively extrapolating exponentials regarding AI (if that makes sense). Not saying that you represent that - but some people like EY do I think.
That being said, I think if we develop AGI and ASI it can for sure also be dangerous, and looking at AI risk is absolutely warranted.
It could also be argued that "rationalism vs empiricism" is one of those dumb historical philosophical conflicts which isn't really relevant today. Nobody really argues about it, they just learn about it in undergrad philosophy and have to pretend it's a sensible argument, but it's 2023 and we all fundamentally agree that the answer to whether knowledge should be obtained from reason or experiment is "Well yeah, both, obviously, depending".
Yeah, I think it's largely meaningless and irrelevant to the current naming debate, I just thought it was an interesting connection since people often criticize the name on this grounds.
"Rationalist" might be a bad name just because it sets some people off. I know a couple of otherwise intelligent people who apparently couldn't think about what rationalists might be like because they were stuck on the idea that people aren't rational, or at least not very rational.
And it sounds like you think you're smarter than everyone, which is a turnoff to lots of other people, at least in the USA.
One more thing about the intractability of the fight between Israel and Palestine: they aren't the only players. At a minimum, there's Iran supporting Palestinian aggression (possibly also Russia), and American millennialists who want to start the end of the world at Megiddo.
Does the latter group actually exist in significant numbers or are they a strawman based on one thing that some random loony said one time?
I'm not willing to go so far as to say that people are hoping to end the world, but certainly some (the church I attend, at least) believe that the existence of the Jewish people is important to the successful resolution of Revelation, and that if they were wiped out, that could perhaps somehow cause issues. It's vague (because Revelation) but certainly they believe that helping the Jewish people maintain control of Israel is important for the successful conclusion of God's plan.
So far as I know, they contribute a good bit of money to Israel and are influential on the US government, but I await further information.
"Trust me bro they're real" isn't convincing. I want to know which sects believe this and how many members they have.
https://www.facebook.com/nancy.lebovitz/posts/pfbid0uDYjKqLDU9tTZfLziQJr5NYGQKJ1ibNxgECPvfECMCpbAQdteAMauAzdKbLhUqy2l
I asked my facebook readers about this, and you can see a bunch of answers. Here's one of the better ones:
"John Hagee’s Christian Zionist organization, Christians United for Israel, has over 10 million members, which means that one Christian Zionist organization alone, not counting any other Christian Zionist orgs, has more members than there are Jews in the US (about 6 million, according to the Pew Research Center, and not all Jews are Zionists).
Academic Tristan Sturm estimates the number of Christian Zionists in the US at around 30 million — almost 10% of the total US population, and twice the worldwide Jewish population. That’s a large enough faction to influence US politics, and the US is a major contributor to Israel’s military efforts."
The question wasn't how many Christians support Israel, it's how many Christians support Israel because they want the world to end and think that Israeli control of Israel is somehow required for that.
Nancy may exaggerate (unintentionally, of course) the ubiquity of those specific views, cf. https://forward.com/opinion/431077/the-idea-that-christian-love-for-jews-is-about-rapture-is-a-paranoid/, but she nevertheless seems correct that Christian support for Jews / Israel / Zionism is very significant. This poll: https://www.pewresearch.org/short-reads/2014/02/27/strong-support-for-israel-in-u-s-cuts-across-religious-lines/ (from a few years ago, but I didn't immediately find anything newer) found that 31% of Jews thought the US wasn't supportive enough of Israel, compared to 33% of Protestants, 46% of White Evangelicals, and 29% of Christians in general. With over 30 times as many Christians as Jews in the US, Christian support for Israel is highly significant.
As far as donations, this article: https://religionnews.com/2023/05/01/how-much-do-us-jews-and-christians-donate-to-israel/ states:
>A study of evangelical Christian giving to Israeli nonprofits covering a longer time period – from 2008 through 2016 – identified 11 organizations donating an estimated total of $50 million to $65 million over the entire period...While this is less than 3% of all of the funds Israeli nonprofits obtained in foreign donations, we believe it’s worth watching this trend in part because the amounts grew in the period we reviewed.
3% of foreign donations from Evangelical Christians in particular (and probably more from other Christian denominations) is probably a significant amount in absolute terms, but not the most significant in relative terms.
I think their greater impact is through their weight in the American electorate; see: https://www.richardhanania.com/p/stop-overrating-the-discourse.
Regarding the extent to which Christian support is driven by those particular doctrines, rather than merely coexisting with them, see also this anecdotal comment: https://www.richardhanania.com/p/stop-overrating-the-discourse/comment/43311125.
Thank you.
Always my pleasure!
Stop conflating support of Israel with Zionism.
But Israel has every right to occupy Palestine, or any of their other neighbours who have attacked them. Just as the Allies had the right to occupy Germany and Japan after WW2 -- if you start a war with someone then they have every right to defeat you and occupy you. We didn't just beat the Nazis back within German borders and try to coexist with them, we wiped them the fuck out.
This occupation needs to last as long as it takes for the ideology which refuses to live in peace with its neighbours to be eradicated. In Germany, we de-Nazified the place and withdrew within four years, and it worked out quite well. In Palestine it apparently hasn't worked so well; every time the Israelis withdraw they get attacked again. I don't know what the equivalent of de-Nazification in Palestine might look like, but it certainly hasn't happened yet.
Well, I guess that you got that war that you wanted. Sorry if it isn't going as you hoped.
Only resolutions by the UN Security Council are "binding", not resolutions (recommendations) by the General Assembly.
Besides, under the Geneva Conventions, which protect civilians and noncombatants, Hamas's murder and abduction of Israeli citizens on October 7th, plus the indiscriminate rocket attacks, were super-illegal and immoral in the first place!
To hold only Israel accountable, but to state that Hamas can murder children and take old grannies hostage, is inconsistent and motivated reasoning on your part.
Just to be clear what is the territory occupied? Anything after the 1967 war? After the 1948 war? The formation of Israel itself?
On nuclear regulation:
There's an article that is making the rounds of the rat blogosphere that I think is seriously wrong. You've probably seen it quoted. It blames the ALARA (as low as reasonably achievable) radiation protection standard for all the economic problems of US nuclear power. From https://worksinprogress.co/issue/taming-the-stars/:
"ALARA is defined as: "making every reasonable effort to maintain exposures to radiation as far below the dose limits in this part as is practical consistent with the purpose for which the licensed activity is undertaken, taking into account the state of technology, the economics of improvements in relation to state of technology, the economics of improvements in relation to benefits to the public health and safety, and other societal and socioeconomic considerations, and in relation to utilization of nuclear energy and licensed materials in the public interest." [footnote citing 10 CFR 20.1003]
As currently applied to nuclear power, ALARA literally means that every expense must be spent on eliminating every possible effect of nuclear power, at least until the resulting electricity is no cheaper than what the market pays for electricity generated from non-nuclear sources. Since standards cannot ratchet downwards, only up, safety standards that are just about affordable at the top of energy price spikes get entrenched, meaning that nuclear is made unaffordable until the next price hike – which makes it even more expensive, since it prevents learning and the economies of scale that a steady pipeline of projects can allow. ALARA, as currently applied in the US and much of the rest of the developed world, means that nuclear power is never allowed to be cheaper, no matter how much safer and cleaner it is than other sources of energy. It makes affordable, safe nuclear energy impossible, and forces us to rely on much less safe energy sources instead." End quote.
The first paragraph is a literal quote from the regulations. Everything after that, where the author tells you what ALARA "literally means", is wrong. At least, I think so, to the extent I understand the claim being made here.
Is the author saying that nuclear regulations actually change in response to energy prices? This absolutely does not happen. Is he saying that inspection standards or radiation protection procedures change with energy prices? So that some regulator or energy company employee is actually making the decision to increase radiation protection standards when they observe nuclear becoming cheaper compared to non-nuclear energy? Highly implausible. Energy prices change all the time, and regulations/inspection procedures/radiation protection procedures are only changed in a slow and cumbersome way. Also, industry would have no incentive to make itself less competitive, and it is very much NRC culture to NOT pay attention to energy prices.*
Okay, maybe the author is making a more general claim that the level of safety/security regulation increases over time, it's a one-way ratchet and regulation prevents nuclear power from being as cheap as it arguably should be compared to other energy sources. A fair but unoriginal claim. But then why the talk about ALARA?
First of all, understand that ALARA is about radiation protection. It is not the be-all and end-all of nuclear regulation. The ALARA standard adds on to other radiation dose regulations. For example, a typical nuclear power plant worker can get a max of 5 rem per year of occupational radiation exposure (10 CFR 20.1201) AND their radiation dose must be ALARA. So if a worker gets more than 5 rem, it's a violation of both regulations. If a worker gets less than 5 rem but the plant does not make reasonable effort to make the dose ALARA, it could be a violation of the ALARA standard. Conclusion...even if the ALARA standard didn't exist, nuclear plants would have to put significant effort into radiation protection, albeit not quite so much.
I'm not gonna say ALARA is unimportant. But it's only one of a whole host of regulations that apply to nuclear power plant design, construction, operation, and decommissioning. There are regulations that apply to nuclear security, reducing and mitigating the risk of nuclear accidents, emergency planning, environmental protection, and I could go on. There would be a significant regulatory burden even without ALARA.
Maybe the author is using ALARA as shorthand for the entire group of US regulations and laws relevant to nuclear? Or the entire regulatory mindset? But, if your argument is that nuclear regulation should incorporate cost considerations, why pick on one of the regulations that explicitly incorporates consideration of cost, instead of the many that don't consider cost at all?
Another quote: "[T]he components that are not safety critical are still subject to a gold plated ALARA standard. This means the same component is regulated differently depending on whether it is in a coal plant or a nuclear plant, even if it is far away from the reactor and cannot affect it."
False. The reason that a component in a nuclear plant is regulated differently from a component in a coal plant is that different laws, regulations, and administrative agencies regulate nuclear plants from those that regulate coal plants. ALARA has absolutely nothing to do with that.
I hate to be all argument from authority, but I notice the author, John Myers, seems to be a UK YIMBY activist and if he has any experience in US nuclear, I'm not aware of it. Please understand that the statement "ALARA, as currently applied in the US and much of the rest of the developed world, means that nuclear power is never allowed to be cheaper, no matter how much safer and cleaner it is than other sources of energy" is false. That is not what ALARA means. ALARA is not that powerful. Please stop quoting this guy uncritically.
*Because NRC's mission is to ensure nuclear safety and security, not to ensure that the US nuclear power industry is economically viable. If you want something to complain about, ask Congress to change that.
One of the biggest problems with this type of argument is that nuclear power is uneconomical everywhere, not just the US. Finding out about Flamanville 3 was a major update for me, especially since the nuclear fanboys often point to France as the place that got things right.
The reason they think there's a relationship between energy price and regulation amount is the word "reasonable", because they read "unreasonable" as mostly synonymous with "too expensive".
So, as safety features get cheaper or easier to implement, what constitutes "reasonable" grows more expansive.
For example, it may start out unreasonable to require everyone to wear hazmat suits all the time while also maintaining a nuclear plant. But say that after 50 million dollars of work designing more and more passive safety features, instead of saying "we have met a reasonable standard" and stopping, someone goes "but wait! We could get even safer by forcing everyone to wear hazmat suits! That's just not as much of an imposition anymore, and it only costs 10 million, which is still 40 million below the previous reasonable threshold!" This process repeats until the energy price of nuclear goes up, at which point someone can point out that safety beyond that margin is unreasonable.
I have some ideas about what Israel should be doing. I'm not sure whether I'm right, nor whether this is psychologically possible even if it is right.
I think Israel should be looking to its own borders and its own safety. Wrecking Gaza will not necessarily make Israel safer, and may be putting it at more risk. It's certainly creating more hatred for Israel, and I gather there are Hamas leaders in other countries -- they aren't at personal risk from the attack on Gaza.
10/7 wasn't just an atrocity, it was an embarrassment. I assume the borders are getting more attention, but are they getting more thoughtful use of tech? Bulldozer-proof barriers?
Destroying Hamas' tunnels has some practical and humanitarian issues, but additionally, the attack came by air and sea as well as underground.
As I understand it (discussion is welcome), Hamas' intent was to provoke Israel into a drastic reaction so the world would stop supporting Israel (maybe also to make it more likely for Moslem countries to attack Israel), so that Israel could be destroyed. It's a vile approach, but it might actually make some practical sense. I doubt that Israel will be destroyed, but I still think it would be bad if it were on the receiving end of a big attack.
A part which might not be psychologically possible is to quit abusing Palestinians. Torture and a lot of imprisonment might, oh maybe just might, have something to do with why it was possible to keep such tight security on the 10/7 attack. I'm not sure how many people were involved, but I'm expecting low thousands.
Maybe they *were* warned. I get the impression the Israeli government didn't want to believe such an attack was possible.
Meanwhile, Israeli military capacity is being spent on wrecking Gaza, and perhaps the most valuable thing being wasted is attention.
Just by the way, Netanyahu is staying in power while the attack on Gaza is going on. I'm not sure when the next possibility for getting him out of office is, though I'm betting he will be out. In any case, his incentives to continue the attack are personal as well as emotional.
Sidetrack: it may not be possible to get all the hostages back. I wouldn't be surprised if some of them are dead, and I've heard a plausible claim that some of them are being held by groups other than Hamas.
I've thought from the start that this is a crafty land grab, by Israel, and nothing I've seen since has changed that view, if anything the opposite. Urging Palestinians to move to south Gaza, and then shutting down all utilities in northern Gaza and trashing it ever since, to encourage the Pallies on their way, and then the IDF promptly occupying most of it, is all a bit of a giveaway that their aim is to annex at least Northern Gaza if not the whole lot.
The Israelis must have known about the Hamas plan in advance. For example, it was reported (although how reliably I don't know) that Egypt warned them about it some days previously. So, by not taking more precautions in anticipation of it, one can only assume the Israelis were willing to let it proceed to its full extent, so they would gain the sympathy and support necessary to invade Gaza in their turn.
Obviously it's unfortunate for the Israeli hostages, and innocent Palestinians come to that, but if the above supposition is true then the policy is evidently that regrettably they are expendable for a greater long-term benefit to Israel. The Israelis may even be able to get most hostages back, as well as keeping northern Gaza, a double win!
Note that I am not criticizing Israel. Netanyahu seems like a true statesman, willing to make strategic decisions at the risk of his own popularity. In any case, Hamas itself brought all this on the Palestinians. Also, a big punch-up was inevitable sooner or later anyway, due to the Palestinian population in Gaza increasing so rapidly.
Israel has occupied Gaza before and could do so again any time it wanted to. This sort of 5D casus belli makes no sense in reality, even for, say, Pearl Harbor, let alone here.
Israel unilaterally abandoned Gaza in 2005, forcibly evacuating the whole Jewish population. They haven't occupied Gaza in the many wars that Hamas started by shooting rockets at Israel. Israel has offered Gaza to Egypt, but Egypt didn't want it. Why would Israel, or anyone in their right mind, want Gaza? And why would any Israeli officials want to go down in history as an epic failure by getting their acquaintances or relatives killed (*everyone* in Israel knows someone who died in the attack) just so they can get a small piece of land with no resources that's at best full of rubble, at worst full of Palestinians who want to murder them? You are suggesting a conspiracy theory that not only paints the Israeli government as cartoonishly evil--which is already a red flag--but as wildly irrational at the same time.
2005? That was nearly twenty years ago. As I mentioned, the Palestinian population in Gaza has been ballooning in recent years, and by now has probably almost doubled since then.
Regardless of what seemed the best option in 2005, a rapid, and likely continuing, exponential increase like that, on what you yourself call a "small piece of land with no resources" mandates an urgent change of policy before the rest of Israel is threatened to an existential extent.
Yes, there are roughly twice as many Palestinians in Gaza now as there were in 2005. That makes Gaza twice as unappealing a place for Israel to have anything to do with now than it was in 2005, and they went to a great deal of trouble to pull out of Gaza then.
Israel does not want Gaza. If it didn't have Palestinians all over it but were in its pristine natural form, sure, it would be worth something. But you could turn a Gaza-sized strip of the Negev Desert into a decent place to live easier than you could turn Gaza as it presently is into a decent place for Jews to live. Israel doesn't want it.
They might be stuck with it, though, because nobody else save Hamas seems to want it either.
John, I don't know if you're based in the US, where in most areas people can be relaxed and choosy about land because there is so much of it. But in a small country like Israel you can never have too much land, and every scrap is valuable, even if it is a barren dusty wasteland. With know-how and commitment it might not stay that way.
Coastal land is even more potentially valuable, for example as holiday resorts, with their tourist dollars, or desalination plants, or nuclear power stations with a handy and ample supply of cooling water.
Also, land isn't just about places to live or grow crops. Land is a military asset, and the more "hinterland" you have, even if uninhabited desert, the more time and elbow room there is to counter incursions. For example, that's why most ancient cities were founded a few miles up-river from the sea, to give some advance warning of sea-borne invasions and time to prepare!
If Gaza were a barren dusty wasteland, sure, Israel could do something with it and would probably try.
Gaza isn't a barren dusty wasteland, it's a war-torn city with a couple of million Palestinians all over it. The couple million Palestinians are a huge *negative* to Israel, one that far outweighs the value of a few hundred square kilometers of barren dusty wasteland and/or ruined city. And Israel is not going to ethnically cleanse Gaza of all those Palestinians, no matter what some people here like to claim. So, owning or occupying or administering Gaza is a negative for Israel.
Of course, living next to Hamas is *also* a negative for Israel, and 10/7 changed the calculus on which is the lesser evil. So I expect we will see Gaza under Israeli rule for the next few years. But as an instrumental goal, not a terminal one.
And Israel has been able to obtain that, if it wanted, since 1948 by killing or deporting all the people. It has never acted on its supposed desire. Even in the recent war, Israel is not expelling Palestinians from land it conquers. In what way is it a serious desire if Israel never acts on it?
Wow. That's definitely picking the situation up by the other corner.
I'm more inclined to believe in stupidity on the Israeli side rather than plotting, but I don't know what can be proven. The version I'm familiar with is that Egypt did warn them, but the Israeli government wanted to believe Palestinians had been mollified with jobs, and there hadn't been an attack for a while.
Would Israel want a land grab of utter wreckage in Palestine, possibly with extra attacks and terrorism? I don't think so, but it's hard to tell.
It's true that what's being done to drive people out of northern Gaza when they have no refuge anywhere is a disgrace, but was it intended from the start? What could be used for evidence?
It's not just unfortunate for the hostages, even if you have no sympathy for Palestinians. There are the 1200 dead and their families and friends, at least.
I was concerned about appearing sociopathic with my rather chilly analysis, but I should have remembered this is ACX.
Yea. This topic honestly is making me reconsider participation here.
I've actually recommended this board to family members as a place to find rational online discussion, sheesh.
I don't know why Scott hasn't at least banned NS.
I am happy that people are at least not rising to the bait and mostly ignore his comments.
We're easily bored.
Your desperate desire to believe the propaganda of one side is just slightly less annoying than your desperate desire to convince everyone else of it.
What do you possibly know about the provenance of that video, or the claims made in the tweet? Nothing, and nor do I.
What kind of deal would Hamas agree to? Its current charter is pretty clear that Hamas only views the 1967 borders as a starting point for a unified Palestine from the river to the sea. Its earlier charter was even more explicit.
Likewise, the remaining Arab states didn't signal their willingness to negotiate when they issued the Khartoum Resolution (also called the Three Nos: No peace, no negotiations, no recognition of Israel.)
If you really think that the Israelis are the impediment to peace, should they embrace the Khartoum Resolution and the Three Nos? Would that be a step in the correct direction?
Sure, so Israel should go back to its 1967 borders. Like how it was in 1966. Did Israel have peace with its neighbors in 1966?
Did they ever offer to make peace before 1967?
Because that would undermine the idea that Israel’s expansion is the impediment to peace. Hamas’ charter makes clear that Palestine has to go from the river to the sea - any Israeli border is an impediment to “peace” - with “peace” here meaning capitulation and an unconditional Arab victory.
Israel indeed opposes such a peace, just as Palestine doesn’t seem keen on unconditional surrender either.
So I’m interested in what you think the Arab states are willing to give up as part of a peace deal. The Khartoum Resolution doesn’t provide much to go off of, does it?
Likud’s charter came out in 1977. Hamas’s charter in 1988.
So we still have the period from 1948 to 1967 to account for. Why didn’t the Arabs make peace then?
Who got skin color into this?
If, as I believe, it was Hamas' intent to cause an Israeli overreaction which would lead to the destruction of Israel, this is remarkably depraved.
However, Israeli failure to be prepared strikes me as an embarrassment.
There is no reason to think the Israeli government wanted an attack on the scale of 10/7. As for how much they wanted to attack Gaza, it's hard to tell. They'd been going along for years without comprehensively bombarding Gaza, so maybe they didn't really want to.
It's all very well to talk about revealed preference, but you also need to estimate what hints about what people want might be relevant.
Considering the amount of backlash they created, maybe it wasn't a *well*-calculated loss.
At the bare minimum, the support they have among the USA's youth seems to be in a downward spiral. Those are the future congressmen and congresswomen whom, in 40 or so years, they would have to bribe to get their yearly X billions in aid. That's not to mention Europe, which is geographically closer and influential with the US.
And for what, exactly? What has the IDF concretely accomplished other than 12K dead, 1 million+ displaced, and a northern Gaza full of rubble, destroyed armor, and Hamas? Not to mention the economic havoc of $260 million down the drain a day and 350K Israelis leaving for the diaspora outside Israel.
No long-term investment can be judged after 2 months, but genocide in front of the camera looks bad for business.
>why would you stop them?<
...because they're doing it to you?
>Doing what to Israel?<
Well let's check your first comment... where was it... ah yes.
>commit an atrocity so horrible that it could justify even genocide in retaliation<
That's not something that just goes away afterward, that's a scar in the public conscience for decades. That's a wound that keeps bleeding.
As is an invasion. It's been less than two months since this attack and people are already clamoring for Israel to calm down. How long were people complaining about Iraq and Afghanistan?
I'll let Bobby Bare explain the concept of winning a fight like this. https://www.youtube.com/watch?v=Yv_fuejbELc
Just clicked through the "implicit association test" Scott referenced in his "Quests and Requests" post, and got a strong perception that I would get about the same bias given black/white colored squares instead of dark/light skinned people. I think in my mind, negative emotions are in some part defined as negations of positive emotions, and dark skin - as a negation of light skin. So it's natural that it's easier to hold positive<->positive association versus positive<->negative.
It's also a bias of sorts, but not _that_ kind of bias Scott was hinting at, it seems
In my opinion, the implicit association tests don't show racism at all. They just show associations. This is why black people also, on average, register as "racist" against black people on the tests. You could probably make an implicit association test show that people associate white people or soldiers with Nazis more than they associate other races or professions with Nazis. This doesn't mean people are racist against white people.
Racism may imply you have certain associations, but the reverse is not true.
What are examples of times and places when political change has been both fast and good? (Good in your opinion; fast with respect to typical political change throughout history.) Change directly related to the end of long wars, independence wars and the fall of the USSR don't count.
To be clear: it can be after a (non-independence) revolution, but not a time when things are much better simply because a time of peace has followed a time of war.
The Schleswig-Holstein question had a nice resolution. The lands had bounced back and forth between Denmark and the HRE/Prussia/Austria/Germany for centuries, and was thought of as an insoluble problem. When Germany lost WWI, the Allies let Denmark decide what to do, and to their credit they held a plebiscite. Problem solved. Not even Hitler changed the border.
https://en.wikipedia.org/wiki/Schleswig%E2%80%93Holstein_question
Lord Palmerston: "Only three people have ever really understood the Schleswig-Holstein business: the Prince Consort, who is dead, a German professor, who has gone mad, and I, who have forgotten all about it."
Another instance of a border question included for comic relief:
"In 1984, Canadian soldiers visited the island and planted a Canadian flag, also leaving a bottle of Canadian whisky.[9] The Danish Minister of Greenland Affairs came to the island himself later the same year with the Danish flag, a bottle of Schnapps, and a letter stating "Welcome to the Danish Island" (Velkommen til den danske ø).[10][11][12] The two countries proceeded to take turns planting their flags on the island and exchanging alcoholic beverages. There have also been Google ads used to "promote their claims""
https://en.wikipedia.org/wiki/Whisky_War
What Deng Xiaoping did for China has got to come in number 1?
Before or after the Tiananmen Square Massacre?
Both before and after. TSM was less than a rounding error relative to what Deng achieved.
I'd nominate the split of Czechoslovakia into Czechia and Slovakia. It was negotiated and carried out entirely within a single calendar year; it was entirely peaceful; and it created two stable and culturally-coherent democracies. How many peaceful national divorces have ever even been attempted let alone quickly accomplished?
A quick google search tells me that Czechs are about 1% of the population in Slovakia and Slovaks are about 2% in the Czech Republic. Were the populations intermingled before the split or were the borders easy to draw according to the demographic distribution? If the latter, the ease of creating commonsense homogeneous nation-states might explain the relative painlessness of the divorce settlement.
AFAIK, Czechia is basically the old Holy Roman Empire regions of Bohemia and Moravia, while Slovakia was a Slavic country ruled by Hungary, the Ottomans, and the Austro-Hungarians, with maybe Poland in there somewhere for good measure. So I've had the impression that they were fairly distinct, like Austria-Hungary was.
Hazy recollection knee-jerk response:
The post-WWII treatment of the Axis powers by the Allies (including the Marshall Plan) seems to fit. At least in the sense of "good" centered on none of Germany, Japan, or Italy invading anyone since then (that I recall). Corrections welcome! (Yeah, it is a change related to the end of a long war, but it isn't _just_ the end of WWII. The peace afterwards was managed much better than the aftermath of many (probably _most_) wars.)
It seems off to me to separate the postwar settlement from the war itself in answering this question. I don't think the specifics of those postwar rebuilding and rehabilitation programs could have happened without their complete defeat in war. War is politics by other means and all that.
In fact, the defeat of Japan was somewhat less complete than that of Germany, which may have been expedient but affected its "spiritual" rehabilitation, for lack of a better word, and this has had lasting consequences with respect to its relations with its neighbours.
That's fair. I consider it useful to consider the postwar settlement special, mostly because it was remarkably successful in comparison to many other postwar settlements - even in cases where the end of the war appeared to be equally decisive.
Many Thanks! Interesting! So there was an analog to project Paperclip internal to West Germany?
"But it's also empirically true that there's no evidence they were plotting a nazi coup or future wars, because the military balance of power and overall economic conditions had changed so much that they genuinely gravitated towards a mostly democratic ideology within the framework of being a US client state."
Compared to most outcomes of most wars, I'd count that as a success.
Many Thanks! It is amazing that the outcome was as favorable as it wound up being, particularly since the forces driving the loosening of the process were schedule pressure and manpower limits rather than any careful calculation. That Germany wound up neither as permanently resentful as after WWI nor reverting to Nazi rule seems like amazingly good luck. This makes it clearer why so many other postwar outcomes were so dismal.
I think you are probably grinding an axe here, but I honestly can't tell which one.
The years of occupation were certainly part of what the Allies did, and I assume that they were part of why the Axis powers were turned into nations that everyone could live with. In that sense, it worked, while similar attempts by the USA more recently have failed e.g. in Afghanistan.
I'm guessing that "re-education of those savage barbarians" is sarcastic, but I don't know what specific axe you are grinding here.
Is something false? What, specifically?
Is there something you don't like? What, specifically? And what would you have preferred as an alternative?
I don't agree that the Germans are uncivilized. What has been problematic in the past is that they tend to be more earnest and enthusiastic than most, in that having decided to do something, they go at it hammer and tongs and sometimes don't know when to stop! Of course that need not be a Bad Thing, and is usually quite the opposite.
If you or I were preparing an encyclopedia of chemistry, for example, we would probably be content with ten volumes. But a German professor wouldn't be satisfied with less than twenty. Actually, I think there is some scientific encyclopedia with seventy or more volumes, and the editors are inevitably - you guessed it - German! :-)
If we were drinking in a bar one evening, we'd probably have had enough after five pints. But a German drinking party would drink ten pints, then at 2am tickle their throats to honk up, after which they could start on another ten.
Ok. Many Thanks!
Yeah, a real blast from the past.
The transition in Spain comes to mind, which took place after the death of the dictator Franco and led to the establishment of a democracy within two years and a peaceful handover of power a few years after. It is probably most remarkable in that Franco appointed Juan Carlos as his successor and all signs pointed to a continuation of dictatorship. Instead he rapidly instituted a democracy and willingly gave up his powers.
Last year I read a book with a lot of stuff in it about Churchill during the war. There were accounts of him wandering around his residence in the night wearing outrageous get-ups. I have forgotten the details -- but some were women's clothing, like maybe a lacy negligee, and some just absurd, like maybe a clown suit. Also accounts of his champagne dinners attended by his staff, visiting dignitaries, etc. Churchill would sometimes lead the group in skipping in circles around the table, I believe with music playing.
Others who have read about these things -- what do you make of them? I know he was an alcoholic -- I know he was not crazy. Why did he do those things? Was there more tolerance then for eccentricities of this kind? Was it a way of demonstrating his self-confidence? -- like that he was so sure that he was admired and respected that he felt able to indulge his weirdest whims in public? Was it a way of making fools of his dinner guests?
I remember reading an article that claimed Churchill deliberately cultivated a reputation for carefully-chosen socially acceptable vices because he felt it made other politicians more comfortable dealing with him. The focus of the article was on his reputation for heavy drinking, but eccentricities of private dress and behavior seems like it might be more of the same.
About his drinking in particular, the article said (citing statements by one of his daughters) that during his time as a cavalry officer in India, he got in the habit of drinking what his daughter called "Papa Cocktails" in the mornings, consisting of a big glass of water with a small splash of whiskey for flavor, which he'd nurse for several hours. So other people saw him drinking giant whiskey cocktails in the morning and assumed he was consuming a lot more booze than the half a shot or so that was actually in the drink. And since heavy drinking was then considered a relatively harmless vice, he considered it useful to encourage the perception rather than correct it.
By 'comfortable', do you (or the article) mean that the other politicians would have been less comfortable attempting horse-trading with people whom they perceived as puritanical? Or is this 'comfortable' in the more basic sense of, 'I feel like I can be myself around that old boozer?'
I'm not entirely sure, as it's been a while since I read it. But of those two, I think it was more of the latter. There was also an element that Churchill was obviously extremely talented and rose very high very quickly relatively early in his political career, becoming a cabinet minister at the age of 34 (his immediate predecessor and successor in that position, President of the Board of Trade, were 11 and 21 years older than him respectively) and being transferred to one of the most senior cabinet posts (Home Secretary) a couple years later, so having some visible flaws made him seem less threatening to his more senior coworkers.
If Supply & Demand is a thing, why does Black Friday exist?
If Black Friday is driven by a spike in demand, then I'd expect prices to grow rather than shrink. If Black Friday is driven by the supply side, wouldn't concentrating the costs of logistics/production into a single month make less money than smooth, continuous operations over the course of the year?
The common wisdom I've always received was: suppliers compete on price for business. But this just doesn't add up, to me.
I think the other answers miss price discrimination. Firms can make more money if they can sell goods for less to people who care about price and who therefore are willing to shop early, and those who do not care about price, or who are disorganized, and who are willing to pay more right before Christmas. If you put this on a supply and demand curve, it allows the stores to effectively create two supply curves to capture different parts of the demand curve. It's the same logic by which sales and coupons work in general.
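The two-segment logic above can be put into a toy calculation. All the numbers here (segment sizes, willingness to pay) are invented purely for illustration:

```python
# Hypothetical market: two customer segments with different willingness to pay.
# Segment A: 100 price-insensitive shoppers who will pay up to $50.
# Segment B: 300 price-sensitive shoppers who will only pay up to $30.

def revenue_single_price(price):
    """Revenue if the store charges one price all year."""
    buyers = 0
    if price <= 50:
        buyers += 100  # segment A buys
    if price <= 30:
        buyers += 300  # segment B buys too
    return price * buyers

# Best single price is whichever of the two thresholds earns more:
best_single = max(revenue_single_price(50),   # only segment A buys: 5000
                  revenue_single_price(30))   # everyone buys:      12000

# With a limited-time sale, segment A pays $50 at normal times while
# segment B waits for the $30 sale price:
segmented = 50 * 100 + 30 * 300               # 14000
```

The sale beats any single price, which is the point: the inconvenience of the sale (timing, queues) is what keeps the price-insensitive segment from bothering to wait for it.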
Supply and demand explains how prices are set. It doesn't explain how demand or supply are generated. The reason prices don't rise on Black Friday is that while demand spikes, it spikes predictably, so businesses increase supply. Since supply and demand increase simultaneously, the price does not change. Unless you're referring to why there are sales, which is a different behavior, more related to competition and returns to scale.
Price does not change? What? Isn't the whole point of Black Friday that the price changes?
No? The point of Black Friday is that it's the first day of the Christmas season so a lot of people go shopping. A lot of stores offer discounts to try and attract this business. But not all do so it's certainly not the whole point.
Shamus Young blamed consumer psychology:
https://www.shamusyoung.com/twentysidedtale/?p=29624
>wouldn't concentrating the costs of logistics/production into a single month make less money than smooth, continuous operations over the course of the year?
I recently found out our store has to order its Halloween items in February. Smooth and continuous is a pipe dream.
I think if you wanted to properly model firm behaviour, you'd have to incorporate some Game Theory. However, rather than think about Black Friday as an outward shift in demand, think about it more as a temporary increase in the price elasticity of demand by consumers. Consumers aren't just looking to buy, they're specifically looking to buy great deals on Black Friday. They're also looking to buy for the Holiday season, so the demand of consumers will contract once their holiday shopping is over. Firms make pricing decisions based on both current demand and expected future demand, so even if demand shifts to the right on BF it's not clear that prices increase as a result. A lot of firms that offer discounts over BF don't compete in perfectly competitive markets.
There's a two-way relationship between capitalist production realities and consumerist group-think:
1. Invent a shopping holiday on the basis of available statistical information and market the bejeezus out of it on the back (e.g. the front end) of the Christmas advertising push.
2. The idea is even more successful than ever imagined, becoming enshrined in public consciousness as an unofficial national holiday, *specifically* for the overburdened working class who likely won't have much time for Christmas shopping over the next month and whose mass media overexposure means they're more susceptible to broadly slathered marketing dollars spinning up FOMO anxiety.
3. Face the new reality: fake holiday has concentrated quarterly consumer purchasing into one catastrophic annual sales event. Now, even if it would be more cost-effective to spread out your operation, you've conditioned customers to 'wait for the sales'. Buckle up.
4. Do what you can to maximize margins inside the new status quo. Prices don't drop as much as advertisers would like you to think, and when they do it's a way to dump inventory before next year's re-up.
Retail businesses compete on profits and customers *over* competing on price. A low price is simply one of several ways to increase those first two metrics.
"If Black Friday is driven by a spike in demand, then I'd expect prices to grow rather than shrink."
Retailers and manufacturers know black Friday will happen, so they plan for supply to increase to meet the demand in advance.
One study found that only 2% of Black Friday deals were not available at the same price or cheaper within six months either side of the date. That's just one study, but it would make sense given TANSTAAFL.
I thought that a lot of stuff would show cheaper on ebay as people realized they'd impulsively bought things they didn't want, but apparently that doesn't happen.
In the microeconomic sense, 'supply' is 'the amount of a good a seller is willing to sell *at a given price*' and 'demand' is 'the amount of a good a buyer is willing to purchase *at a given price*'. Black Friday, like all limited-time promotions, exists because there are buyers willing to buy most of what they want when they want at the 'normal' price, and other buyers who are only willing to buy at the offer price. By having time- (and often stock-)limited promotions, retailers reap the available profit from both, at the comparatively low cost of making a few 'coincidental' sales at the low price to people who would have bought high anyway.
Price discrimination would be my best explanation. Why do groceries have discounts on Tuesday or some other inconvenient day? Because that way they get to sell at slightly higher prices the rest of the week to price-insensitive customers, and still get to sell to the price-sensitive ones (who are willing to make an effort/deal with inconvenience in order to get a rebate). Same logic drives coupons, etc.
The Black Friday marketing ploy is "come stand in line starting at 5 am and you might get a cheaper TV than normal (limit 1 per family, while supplies last)". It's a great way to get some extra sales from price-sensitive customers without the hit to overall revenue that would come from just having lower prices in a normal way.
https://going-medieval.com/2023/11/17/no-the-church-did-not-kill-joan-of-arc-you-credulous-dullards/
Gets into detail about Joan of Arc's trial being by secular authorities and lacking many guardrails that the Catholic Church required for heresy trials.
On the one hand, the Catholic Church wouldn't have had her killed, and I'm not sure it would have put her on trial for heresy at all. On the other hand, it's the Church that made heresy trials a serious matter, so I think it deserves some of the blame, though rather indirectly.
A spectacular essay about Joan of Arc, patron saint of Catholics who don't fit well in the Catholic Church, at least on the left side. It actually gave me a feeling of what it's like to want a patron saint.
http://tigerbeatdown.com/2011/01/09/running-toward-the-gunshots-a-few-words-about-joan-of-ar/
Reading those posts followed by the Wikipedia article on the Siege of Orleans was rather jarring. The tigerbeatdown post made it sound like Joan was a skilled military leader while the Wikipedia article repeatedly lists Joan urging foolish military attacks only to be overruled by the people who knew better, and her only actual contribution to lifting the siege was a giant morale boost.
Hmm, I listened to the four-part series about Joan of Arc on the History on Fire podcast. The story Daniele tells is not the same as the above article. (Which sounds a bit... curmudgeonly.) If you can get past his thick Italian accent, I found it worth listening to.
Which of the two articles?
Could you say a little about the differences?
The first one; I didn't read the second. It's been a while since I listened to the podcast. I guess most of the facts are not that much in doubt; what is not known is the motivation of the people involved. OK, the second sounds closer to the History on Fire podcast. There's been a ton written about Joan of Arc, and finding the truth amongst all those words is perhaps impossible, so people kinda make up the truth they want.
Seemingly there's now "Joan of Arc was trans" out there, but I don't know how much traction it has or if it's just a publicity stunt like a provincial English museum declaring Heliogabalus was trans:
https://www.telegraph.co.uk/news/2023/11/20/trans-roman-emperor-hitchin-museum-claim-pronouns-woke/
At this stage I'm not even rolling my eyes anymore, just yawning and going "So?" because it's not even worth the energy to fight over this nonsense.
"Elagabalus was trans" has been a thing for some time, and it's a fair conclusion if we take statements about Elagabalus by the Roman historian Cassius Dio at face value. Specifically, Cassius Dio describes Elagabalus as insisting on being addressed as "lady", as referring to him/herself as the mistress, wife, or queen of a male court favorite named Hierocles, and as trying to solicit surgeons to give him/her female genitalia.
Cassius Dio was a contemporary of Elagabalus, and was a high-level politician so he had access to quite a bit of good info about Elagabalus, but he was out of favor and mostly well away from the capital during Elagabalus's reign so he was relying mostly on second- and third-hand accounts rather than personal observation. Cassius Dio was also aligned with Elagabalus's political opponents and was restored to favor and high office after Elagabalus was assassinated and succeeded by Severus Alexander.
In light of this context, it's also defensible to conclude that Cassius Dio's characterization of Elagabalus was malicious gossip at best and consciously-perpetrated political libel at worst. Accusations of unmanliness were a common genre of Roman political insult, and Elagabalus was an easy target for them even if they were groundless, both for reasons of personal appearance (he was young, slight of build, and looks rather effeminate in contemporary depictions) and of ethnicity (he was Syrian rather than Italian or Greek, and Syrians apparently were stereotyped by Romans as effete and effeminate).
This is a persistent problem with pre-modern historiography: an awful lot of important stuff is sparsely documented, so we often have to rely on our choice of embellished narratives written a century or two after the fact (and filling in gaps in their own sources with supposition and guesswork) or one or two contemporaries who seem to be lying liars who lie through their lie-holes.
For example, by far our best contemporary source for the major political and military events of Justinian the Great's reign is General Belisarius's lawyer, Procopius, who was the sort of lawyer who would make Saul Goodman look honest. Procopius was far too well-placed and wrote far too much to be entirely disregarded, and where we can cross-check him he seems to be pretty reliable about the details of stuff like the movements of armies and the progress of public works projects, but we're pretty sure he was lying about how Justinian's body took demonic form at night and his head would fade in and out of existence, and that leads us to wonder how much we can trust him when he talks about the sexual escapades of Theodora and Antonina (the wives of Justinian and Belisarius, respectively).
This is a really good point. There's something ironic about old slurs against the masculinity of political opponents being used to elevate those people 2000 years later as LGBT representation.
That said Hadrian actually was gay, and did a reasonably good job. Caesar was apparently bi, and his name now means 'emperor'. So there are actual role models. :)
Joan of Arc was an extreme tomboy. Redefining "tomboy" as "trans" is not good, and conspicuously opposed to that thing I *thought* we were doing where actual girls were allowed to wear pants, code, play sports, and do all the other traditional guy things if they wanted.
Usurping command of the armies of France is, of course, generally frowned upon regardless of gender. But we'll make an exception if God himself commands it.
I mean, in that era, you were a woman doing woman things or you were a man doing man things. Joan of Arc wanted to do man things, so she dressed up as one. If you transported her to this era, would she be a transman or an aspiring bossgirl? I don't know how you would begin to answer that. The further back you go the less sense our categories make, and we're going back nearly 600 years here.
Joan of Arc was quite clearly not willing to live a conventional life for a woman, but I don't think there's evidence for more than that.
It feels somewhat like saying that any Chinese woman who objected to foot-binding was actually a man.
Thinking about the fall-injury incentives thing from https://slatestarcodex.com/2016/11/10/book-review-house-of-god/ is it possible that somehow adjusting the basis on which medicaid and other government programs pay for dialysis would motivate existing medical providers to throw their weight behind reforms?
Would it make sense for advertisers to aim at spaced repetition rather than apparently just buying as much repetition as they can afford?
They do. There are models that take into account saturation effects (after a while, there is no increased buying for extra ad views), memory/decay of the ad effect, synergy with in-store promotions, seasonality, etc.
Then you can run your favourite optimisation process to maximise future ROI, based on weekly spend patterns.
(It's part of my day job to build models like these.)
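For the curious, a toy version of the adstock-plus-saturation idea such models use. The decay and saturation constants here are invented for illustration, not taken from any real model:

```python
import math

def adstock(spend, decay=0.7):
    """Geometric adstock: each week's effective exposure carries over
    a decayed fraction of previous weeks' exposure (memory of the ad)."""
    carry = 0.0
    out = []
    for s in spend:
        carry = s + decay * carry
        out.append(carry)
    return out

def response(exposure, saturation=100.0):
    """Saturating response curve: past a point, extra exposure buys
    very little incremental effect (diminishing returns)."""
    return [1 - math.exp(-e / saturation) for e in exposure]

# A bursty, "spaced" schedule: the adstock carryover keeps the effect
# alive between bursts, which is why spacing can beat constant spend.
weekly_spend = [100, 0, 0, 100, 0, 0]
effect = response(adstock(weekly_spend))
```

Feeding candidate weekly spend schedules through a fitted version of this and picking the one with the best predicted ROI is, roughly, the optimisation step mentioned above.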
Seems like that's a fair strategy for promoting a brand, and you do see things like that. Coca Cola doesn't need to advertise every day but they'll still do the occasional big campaign and product placement drives to make sure they don't fade from the public consciousness.
I don't think they're able to know whether a specific person actually saw their ads at specific times.
I'm sure they can't, but I wonder if there's a way for large advertisers to play the odds.
Any native French speakers here?
The term "oratrice mécanique d'analyse cardinale" has been trending in the meme-verse lately. It's the name of a device in the game Genshin Impact. I'm trying to figure out whether the name makes any sense in proper French.
A straightforward translation into English gets me "mechanical speaker of cardinal analysis," which doesn't make much sense, particularly that "cardinal" bit. But maybe there is more going on here than my high school French skills can manage.
Not a native but I think I speak enough French and Chinese to explain this. It means: "Mechanical Speaker of Cardinal Analysis" though due to French gender rules the speaker must be a woman or otherwise grammatically feminine.
This is an attempt to translate 谕示裁定枢机 which means something like 'Oracle Adjudicator Machine'. However, if you translate it very literally it means 'Tell Instruct Decide Certainty Door Machine'. Tell-instruct became oratrice (speaker) because it roughly means oracle. Machine became mechanical. Decide/Certainty became analysis. And then they translated 枢机 as cardinal because, for whatever reason, the word for Cardinal in Chinese is literally 'door machine'.
So basically a bad translator. I'd translate it as "machine de jugement oraculaire." Or maybe more poetically "les balances oraculaires."
Traduttore, traditore.
(Translator, traitor. The phrase originally came from Italians displeased with the translation of the Divine Comedy into French.)
One of the benefits of being multilingual is you learn how bad many translations are. A lot of it is petty differences too. I remember a translation that rendered "icy lake" as "very cold lake." And my thought was: why not just translate it literally? Obviously an English speaker understands that 'icy' implies 'cold' just as in the original. But no, 'very cold.'
A DDG search returned this reddit post [0], which claims the virality is just frenchies being proud of the in-game pronunciation, and also because the cadence is just poetically pleasing to the ear.
DDG also returned this article [1], which says the object is a conscious, mechanical weighing-scale which issues legal judgements. "Cardinal" is probably just a fancy way of saying "math".
[0] https://www.reddit.com/r/Genshin_Impact/comments/17q58vw/french_poetry_and_the_oratrice_m%C3%A9canique_danalyse/
[1] https://www.gamingdeputy.com/understanding-the-cardinal-analysis-mechanical-speaker/
>A straightforward translation into English gets me "mechanical speaker of cardinal analysis,"
That's correct, with a detail and a caveat:
-"Oratrice" translates to "speaker" or "orator", but of feminine gender.
-"Cardinale" can refer to a vast number of things depending on the field it's used in, and sometimes to multiple things in a single field. Considering the complete sentence looks really, really like Japanese using gibberish European to look cool, I wouldn't expect them to have had any specific meaning in mind.
Sounds like they just strung together a bunch of unlikely things. "Oratrice" is feminine, so it would be a female mechanical speaker, I guess like Siri or Alexa. According to Google, "cardinal analysis" seems to be an obscure theory in economics.
Reading the fansite for the weapon, looks like it's a morality weapon, so the "cardinal" is probably along the lines of "cardinal sin".
Except cardinal sins translate to "péchés mortels" or "péchés capitaux" depending on what you mean exactly.
Which tricks or skills have the highest ratio of how impressive people think they are to how long they actually take to learn?
Spinning with a drop spindle can be learned fairly quickly, but getting good at it takes time. But everyone who sees you doing it will think you're some sort of witch/wizard, which is pretty neat.
Getting good enough to impress people would probably take less time than getting your pilot's license.
Making things sort of fits. You only need to make one impressive thing and then just keep it around and new strangers will continue to be impressed by it, unlike other skills that might require continuous polishing.
People are impressed by the amount of poetry I know, though I used to be able to learn a new poem very easily. That is no longer the case; it was the first evidence I noticed of memory decline with age.
Perhaps being able to tell for an arbitrary date on which weekday this was/will be?
With a bit of math affinity you can probably learn it in a day, and I guess a conversation "Oh, it's your birthday? How old are you? Oh, then you were born on a Tuesday." is pretty impressive to common folks.
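If you'd rather check your mental answer than trust it, the underlying arithmetic is just Zeller's congruence. (The party trick itself is usually learned as Conway's "doomsday" method, but both compute the same thing.)

```python
def weekday_name(y, m, d):
    """Day of week for a Gregorian date, via Zeller's congruence."""
    if m < 3:          # Zeller treats Jan/Feb as months 13/14 of the prior year
        m += 12
        y -= 1
    k = y % 100        # year within century
    j = y // 100       # century
    h = (d + 13 * (m + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    # Zeller's convention: 0 = Saturday, 1 = Sunday, ...
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]
```

For example, `weekday_name(1969, 7, 20)` returns `"Sunday"` (the moon landing). The mental version works by memorizing one "anchor" weekday per year and month and counting offsets, but the modular arithmetic is the same.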
I used to be able to astonish people by getting a salivary gland under my tongue to squirt saliva 2 or 3 feet. Somebody taught me to do it when I was a teen, and I just followed their directions and they worked. But I've never been able to teach anyone else how to do it. Everybody gets frustrated, and then some start just plain old spitting at their target. Sort of like in Harry Potter when some people trying to apparate for the first time pirouette and then deliberately leap out of their hoops.
I've been doing gymnastics and circus stuff for some years, although only on an amateur level. It is striking how much this concept shows up. Learning a backflip is actually surprisingly easy (for a relatively fit individual), especially into water. On the other hand learning to do a handstand requires a large and enduring effort. The backflip still induces more awe in people I would say. Among my friends we often joke that the backflip has the highest impressiveness-to-time-spent ratio. For the handstand it is also striking how one month spent on technique vs 2 years 'looks the same' to the uninitiated.
I think watching amateur circus stuff can give you some tips here. Although impressiveness is not one to one with entertainingness, a lot of the stuff they do on the scene do not require a lot of skill or practice, but often is simply daring or shameless.
Also Mike Boyd on youtube is a good source, his channel is only him learning stuff and recording how long it takes, and then you can decide for yourself how impressive things are.
I did gymnastics like 8 years ago and could do both a backflip and a handstand. Funnily enough, I've lost the ability to backflip but I can still walk on my hands fairly well.
Jogging a marathon
You can look up your nearest flying club and take a 60 minute discovery flight with an instructor. "I flew a plane this weekend" gets you a fair bit of undeserved admiration. The trick is to not actually pursue the license because flying will eat up all your money.
Even getting a private pilot's license is more impressive than it is difficult. I seem to remember hearing that people manage to do it in two or three weeks of full-time lessons and study.
There are intensive courses that can get you the license pretty quickly. It's more expensive than it is difficult, I'd say, though you do need to go through a fair bit of theory as well as the actual flying.
About 60 flying hours and 120-180 hours of study is the average IIRC.
Yes, though that's mostly due to their taking drugs that aren't on the FAA's approved list. Which kind of has to be different than the FDA's, because the range of allowable side effects is different, but the FAA doesn't have the resources to investigate the entire modern pharmacopoeia. Psychiatric drugs are particularly problematic, for obvious reasons, and the FAA only recently put a few SSRIs on the approved list.
Of perhaps particular interest here, if you take the drug that requires you to lie to your doctor and say "I randomly fall asleep in the middle of the day", or the other drug that requires you to lie to your doctor and say "I often can't focus on important things that really need my attention", the FAA will quite understandably block you from acting as a pilot.
The Air Force has its own rules, different from the FAA or FDA, and they have their own doctors to guide them. And octogenarians are not categorically disallowed the way e.g. self-proclaimed narcoleptics are, because octogenarianism per se is not an impediment to safely flying an airplane. It is associated with a high risk of dangerous medical conditions, but that's what the regular medical exams are for - and I believe most pilots are screened out by the time they are 80.
Solving a Rubik's Cube.
DJing?
This week I've been playing Slay The Princess, a visual novel that has been going mildly viral and getting outstanding reviews. The premise made it seem like it would be a recreation of the AI box experiment: the princess is locked up in a cabin, your job is to slay her, and she will manipulate, threaten or seduce you to stay alive.
(mild spoilers below)
Well it turns out it was less of that and more of a Stanley Parable crossed with Disco Elysium (which I should get to playing sometime soon). The game is essentially a series of vignettes, some touching and some amusing, connected by branching paths. The full playthrough basically requires you to backtrack and re-make your choices, so you can't really play a role of a prudent gatekeeper. Or, well, you can, but it leads to a joke ending and credits roll.
Those who played it, would you like to share your favourite route? (and why is it Razor)
Disco Elysium is a beautiful and extraordinary game. A feast for the senses! I encourage you to fire it up ASAP.
I may play more, and I liked the art and voice acting, but the story was so disconnected from reality I didn't feel anything about it.
I like when philosophy games have a Message, a Point perhaps, something that they actually want the player to understand or think about, while Slay the Princess seems to be devoid of that, and just throws options at you. It's still nice, a lot of dialogue is amusing, and it's certainly creative, but it failed to make an impression on me as well
Over the past couple of weeks I've been seeing an increasing number of articles on the subject of battery systems for renewable energy becoming price competitive with gas-fired plants (example link below). Given that intermittency of renewable energy has been THE sticking point in regards to the energy transition, that seems like pretty big news. Can anyone with more experience or knowledge in the subject offer some insight as to what degree this is hype or if we're on the cusp of a genuine shift in the economics of power generation?
https://www.reuters.com/business/energy/giant-batteries-drain-economics-gas-power-plants-2023-11-21/
For personal home use, solar + batteries has been economically viable for a while (as in, it will reliably pay off in under 10 years, and often under 5 years depending on specifics / tax credits). This typically uses lithium-ion or lithium iron phosphate (LFP) batteries from China.
But for grid use, we're very far from having a viable solution to the duck curve problem caused by solar and wind, and lithium batteries are nowhere near the price for scale that we need.
Li-ion batteries are suboptimal for grid storage except inasmuch as old ones can be reused near their end of life. Until something like Vanadium flow batteries are available at scale it’s hard to see how battery storage displaces most gas plants (at least in the US).
Interesting article on OpenAI with interesting final line: "no corporate structure, no matter how well intended, can be trusted to ensure the safe development of AI" - replace "corporate structure" with "AI design" or whatever and it applies (corporations being like AI in many respects) https://www.theatlantic.com/technology/archive/2023/11/openai-ilya-sutskever-sam-altman-fired/676072/
I'd like to find a sperm donor whose sperm increases the chance of various desirable traits: health, IQ, talents, looks, etc. Additionally, it's extremely important to me to minimize the chances of mental health issues - because the egg will likely bring some. Finally, I'd like to increase the chances for things such as values being close to mine and overall usefulness/success in life.
1) The best approach to all of the above seems to be to know someone's wide family. If there's no history of mental health issues X generations down and across many people, that sounds like reasonable probability. The same for other traits. (Yes, I'm thinking a bit in the vein of https://www.astralcodexten.com/p/secrets-of-the-great-families ).
2) Do any official institutions (spermbanks) offer anything similar? If yes, would you have tips? If not, why not? Are there regulatory issues or is there just such low demand / high stigma?
3) Can you think of a better way to find donors than just get tips on Wikipedia, on these forums and through chain emails sent to competent friends who know competent friends and then doing deep background checks on their families?
I have donated sperm (to LGBT couples, by refrigerated shipping in a transport buffer), and I have a full genome sequence available. I would consider myself to be intelligent (>99th percentile by standardized testing) and talented. If you are interested please email me at [my username]@protonmail.com
I believe we live in the same town. If you and Anonymous decide to do this and you need a discreet person to help some way, let me know.
I got curious about this and googled "High IQ sperm donor". Found these two listings, which of course cannot be trusted without investigation.
http://www.scientistdonor.com/?gclid=CjwKCAiAx_GqBhBQEiwAlDNAZkS1xQkHZSXQ8SyTfUzqEIn1gANwtBqb8LqXQb7frBSHb8XnQbBdVBoC8_YQAvD_BwE
https://www.londonspermbank.com/catalogue/products/donor-1295/
Also found a couple of sperm banks that tell you whether the donor has a degree beyond a BA or BS, and one that does genetic testing of donors, though not for genes thought to be related to intelligence.
In the past there were some sperm banks in California where you got some information about the donor -- I believe it was height, hair color, highest degree attained plus a statement the donor wrote. The most well-known one was California Cryo. They were willing to ship sperm in some kind of special tank that kept it cold. Don't know whether it still exists, or whether there is now any place where you get more information.
I know someone who placed an ad for a sperm donor on the campus of a high-prestige university that happened to be near her, and about half a dozen students answered her ad. The ones she selected underwent screening for STDs, which the woman paid for. She did not tell the guys her name, and they were comfortable with that arrangement. I'm not sure what she paid them, but it wasn't a lot -- I think something like $100 per donation. (She did not have intercourse with them -- she gave them sterile containers to put the sperm in.) As far as I know, there was nothing illegal about any of this. I think there's a reasonable chance that most guys would answer honestly if you ask them about having suffered from serious mental illness. (I'd say things like anxiety and low-grade OCD and some depression after a relationship breakup really do not count. You want to know whether they have had a bipolar episode or been psychotic. Of course, undergrads and most grad students have not yet passed through the age of maximum risk for having a first episode.)
Also heard a rumor there was a "genius bank," selling the sperm of men who'd achieved at a high level. Don't know if it's true.
I think there are probably a number of sane, pleasant, smart men who would be willing to simply donate some sperm to you -- in the same spirit as Scott donated a kidney -- just to help out somebody in need with something they can spare. I'd try asking on here, actually, next time there's a classifieds. Wherever you ask, I recommend you offer to sign either a formal or informal document totally letting the man off the hook for any responsibility for the child. You might also want the man to agree never to contact the child and introduce himself, unless you'd be OK with that.
Oh, one other thing: I wouldn't bother with deep background checks. I'd say it would be enough to ask the man about whether he or any of his first degree relatives (parents and siblings) have any of the common conditions that are both heritable and really bad news to get: bipolar illness, schizophrenia . . . Look up what the most heritable serious mental and non-mental diseases are. I don't think looking for things like crimes and bankruptcies in the family history will get you much. For positive traits, you can ask about talents, life achievement and highest degree attained. I expect most people would give honest answers about these matters. It's not like they're going to get rich with this "job" you're offering.
"Sperm banks handle this through donor anonymity" Even if anonymity had not been challenged in court, I'm skeptical that it would be a long term solution. DNA sequencing has gotten remarkably cheap so I doubt that sperm donor anonymity can be permanently protected.
You may be right. I'm just guessing from the gradual general increase in surveillance that everyone's DNA will probably wind up in some database eventually.
Sounds like yet another case where the "right to waive your rights" would be useful, per the old SSC article https://slatestarcodex.com/2014/11/05/the-right-to-waive-your-rights/
Apparently, "does not worry about being sued for child support" is a trait strongly favored by evolution these days.
What if the donor stays anonymous? Would that work? There's no reason for OP ever to learn his name or address or see his face, even if they have extensive conversations prior to the man donating. And once that's all settled, a friend of his can deliver the container of sperm to OP.
What about giving the donor a symmetric right to the same amount of money from the recipient? (Probably doesn't work, otherwise everyone would do it.)
Why can't the donor and the recipient get around that prior to the donation by signing a child support agreement for a penny per year? Donor then deposits 18 pennies with the recipient.
My understanding is that courts consider child support to be a right of the child, not of the parent. Consequently, any such contract is worthless because the child did not agree to have its child support payments reduced by 99%. There's nothing stopping a parent from just not complaining about not receiving money, but if they change their mind after seeing how hard parenting is then no piece of paper is going to stop them.
I do not agree that the most intelligent and talented people are neurodivergent. Just hunted for info online. In general, people on the autism spectrum do *less* well than normals on IQ tests. When researchers limited their investigation to math ability in high-functioning people on the spectrum, i.e. kids diagnosed with Asperger's, some researchers found these kids to be better at math than average, some found them not to be. While Aspies do not seem to be a lot better at math on average, it could work in the other direction, with a disproportionate number of mathematicians, chess wizards, etc. being on the spectrum. It may be true, it may be an urban myth. But in any case, there are a lot of fields in which one can be a genius: Biology, music composition, philosophy, writing . . . I've never heard people suggesting that genius in these other fields is associated with neurodivergence, and that has not been my observation in real life. Many people who had very high achievement in these other areas seem to have been sociable, flexible, and to lack that "system-builder" quality that's characteristic of people on the spectrum.
I have the following model: in order to be noticed for your genius (because you won a competition or are a renowned professor or whatever), you usually need to be at least moderately world-savvy / neurotypical. However, if you are absurdly intelligent / whatever, you may be able to coast on that and be noticed as a genius even without those other traits. Then a neurotypical person on the 90th percentile may be as renowned as a neuroatypical person on the 95th percentile (I'm making up the numbers). If this model holds, the neuroatypical people we laud as geniuses would be genuinely more intelligent / whatever than their fellow neurotypical geniuses, but that wouldn't mean being neuroatypical is an advantage.
Sticking out from the average in any way can make life hard. We are social animals, other kids sense something unusual about you, and you become the weird one. Plus, if you're unusually skilled at thinking, it's easy to lag behind the norm in other areas like emotional maturity and personal discipline, well into adulthood, which sucks pretty bad. Not speaking from experience or anything...
Look, it's a lot more complicated than Wikipedia thinks. Here are some of the complications:
(1) If you say someone is creative you can mean they're an artist of some kind, or you can mean they literally have high ideational fluency. The owner of a chain of drugstores can be quite creative with how he sets them up, or staffs them, or advertises them. If people in the arts have a higher rate of mental illness, you need to take into account the fact that it is extremely hard to make a living in the arts. It's a hard row to hoe. These people lead difficult lives. I have never seen a shred of evidence that people who simply have high ideational fluency -- people who can think of a lot of uses for a brick in 5 mins, who can come up with a clever, novel way to solve a puzzle -- have higher rates of mental illness. In fact I'd guess that in general they are more successful than their peers in non-arts professions, and lead easier, more gratifying lives. Being quick to think of original ideas is an advantage.
(2) In general, mental health and intelligence are positively correlated. But I'm not saying there's nothing in what you say. Here's what I think is a sophisticated, fair-minded summary:
"The persistent mad-genius controversy concerns whether creativity and psychopathology are positively or negatively correlated. Remarkably, the answer can be “both”! The debate has unfortunately overlooked the fact that the creativity-psychopathology correlation can be expressed as two independent propositions: (a) Among all creative individuals, the most creative are at higher risk for mental illness than are the less creative and (b) among all people, creative individuals exhibit better mental health than do noncreative individuals. In both propositions, creativity is defined by the production of one or more creative products that contribute to an established domain of achievement. Yet when the typical cross-sectional distribution of creative productivity is taken into account, these two statements can both be true. This potential compatibility is here christened the mad-genius paradox. This paradox can follow logically from the assumption that the distribution of creative productivity is approximated by an inverse power function called Lotka’s law. Even if psychopathology is specified to correlate positively with creative productivity, creators as a whole can still display appreciably less psychopathology than do people in the general population because the creative geniuses who are most at risk represent an extremely tiny proportion of those contributing to the domain. The hypothesized paradox has important scientific ramifications." From The Mad-Genius Paradox: Can Creative People Be More Mentally Healthy But Highly Creative People More Mentally Ill?
https://journals.sagepub.com/doi/abs/10.1177/1745691614543973
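The arithmetic behind the paradox is easy to see with toy numbers. Here is a minimal Python sketch -- every rate below is invented purely for illustration; the only structural assumption taken from the quote is that productivity follows a 1/k² Lotka distribution while illness risk rises with productivity:

```python
# Toy illustration of the "mad-genius paradox": illness risk RISES with
# creative productivity, yet creators as a whole still average LESS risk
# than the general population, because Lotka's law (creators with k
# products ∝ 1/k^2) makes one-product creators overwhelmingly typical.
# All rates here are made up for the demonstration.

import math

BASELINE_RISK = 0.20  # assumed general-population rate of mental illness

def creator_risk(k: int) -> float:
    """Assumed risk for a creator with k products: modest at k=1, rising with log k."""
    return min(0.08 + 0.03 * math.log2(k), 0.9)

# Weight of creators at productivity level k, per Lotka's law
ks = range(1, 10_001)
weights = [1 / k**2 for k in ks]
total = sum(weights)

avg_creator_risk = sum(w * creator_risk(k) for k, w in zip(ks, weights)) / total

print(f"average risk among all creators: {avg_creator_risk:.3f}")  # ~0.10, below 0.20
print(f"risk for a 1000-product genius:  {creator_risk(1000):.3f}")  # ~0.38, above 0.20
```

With these invented numbers the population baseline is 0.20, the prolific genius sits well above it, and yet the creator pool as a whole sits well below it -- both of the quote's propositions hold at once.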
(3) If you have some doubts about what I'm saying, look up what careers have the highest suicide rate. Here's what I found with a quick google:
1. Medical doctors
2. Dentists
3. Police Officers
4. Veterinarians
5. Financial Services
6. Real Estate Agents
7. Electricians
8. Lawyers
9. Farmers
10. Pharmacists
Notice there are no poets, painters, musical composers or playwrights?
"Idea fluency" reminds me of "Beautiful Mind" by Brian David Gilbert.
https://www.youtube.com/watch?v=3w1wwGcu0Dk&ab_channel=briandavidgilbert
Round and round the OpenAI drama goes, where it stops...
https://arstechnica.com/tech-policy/2023/11/reports-sam-altman-in-talks-for-openai-return-board-members-could-be-ousted
https://www.bloomberg.com/news/articles/2023-11-21/altman-openai-board-open-talks-to-negotiate-his-possible-return
Eli Dourado posts this great dichotomy on X:
"The world is not that complex, reductionism works, intelligence is basically what matters, world optimization should be tried, all it takes is high agency people with the right values.
OR
The world is very complex, marginalism is what works, intelligence alone isn’t worth much, tacit knowledge and experience and tradition are valuable, smart people thinking they can optimize the world is hubris and inevitably leads to failure or worse."
Which do you think has more truth value? I think I'd go with 10/90 former/latter. A good response I saw says: "first one locally, second one globally".
https://twitter.com/elidourado/status/1726730831048130593
I reject this as a false dichotomy:
"The world is complex, reductionism works, intelligence is basically what matters but intelligence alone is necessary but insufficient, world optimization should be tried, all it takes is high agency people with the right values but tacit knowledge and experience and tradition are still valuable."
The right answer. It is not A, nor B, nor a linear combination of A and B.
Also, "the world is complex"... compared to what?
I've seen broad questions, but this one is on a level of its own. In a couple sentences you've invoked philosophy of science, epistemology, complexity theory, economics, anthropology, history, ethics, and probably more.
I think the main word you're looking for is "emergent complexity". My quick answer is the good old middle way: both angles are important, and getting the right balance between them for the problem at hand is even more important.
When your low-level theory is good enough, reductionism works, but in a kind of hollow way. There is nothing about what a car does that could theoretically not be simulated at the level of fundamental particles and force fields. But there is a lot of information in a car that can only be understood at much higher levels of explanation. The fact that it's designed to fit humans, themselves produced by evolution and steeped in cultures. The need to not only successfully carry them places, but subtly make them feel powerful and safe. The assumption of a steady supply of refined hydrocarbons to burn, and paved roads to run on. The pressure for more fuel efficiency and lower emissions. The cultural and environmental constraints that make Americans buy huge cars and Japanese tiny ones. The memetic trends that produce preferences in color and shape, and so on and so forth. No amount of looking at the fundamental equations of physics could give you a hint that these things would appear.
The larger the ambition, the more you need to have a good level of knowledge of many of these levels. Not just theoretical knowledge, but the kind that internalizes as gut feelings, which means you're also enlisting the help of the huge part of your cognition whose inner workings are not visible to consciousness. But the world is complex, and there is a strong trend towards specialization, so any of us will probably be seeing the whole picture through the partial angle of whatever layers we're most familiar with.
Big advances come from the rare ability to reach up and down simultaneously. Turning sand into CPUs requires going down to the quantum level, but being able to sell those CPUs requires marketing, which is basically applied mass psychology. At the highest level is the emergent behavior of large groups of humans. Human nature at scale is what made the green revolution a success, and communism a failure. Sometimes we go for grand goals, and hate the results. At every level there is uncertainty; you literally don't know until it's been tried.
Sam Kriss's article on René Girard makes the solid point that wide-ranging theorization has fallen out of fashion in the last century. "A century ago, intellectual life was dominated by brilliant, charismatic, but slightly daft theorists, people with intense tunnel vision, such as J. G. Frazer and Rudolf Steiner and Sigmund Freud. Today there are almost none of these thinkers, and the world feels poorer for it. Wouldn’t it be more interesting if we had hundreds of René Girards, each working away on their own vast theory of everything, interpreting all of history through one idiosyncratic insight?"
"The novice knows how things can go right, the expert knows how things can go wrong". I.e. the mechanics may be simple in hindsight, but the state-space and mechanism-space is often larger than you think. E.g. everyone thinks they understand Newton's 3 Laws of Motion until they see the Tennis Racket Effect. It's simple, reductionist, and completely bewildering.
I say: causal-reasoning for the well-understood, effectual-reasoning for the frontier.
P.S. what does "marginalism" mean in this context. scientific iteration? supply & demand? something else?
1) The world is incredibly complex.
2) Reductionism works in the sense that it has been extremely successful as a research paradigm. That's not the same thing as believing that all phenomena can be explained solely by lower-level processes.
3) Intelligence is what matters. There's a reason we rule the world and chimps don't.
4) This one's a bit complicated. On the one hand, I do think there's a decent argument to be made that things like "tradition" and tacit knowledge represent a distributed information processing system that allows solutions to local problems to be developed without requiring any one person to fully understand why they work. The problem it runs into is that it's essentially a Darwinian process, slowly building a homeostatic system in response to signals from the local environment. And as is well known, evolution does not and cannot plan for the future. It can only respond to conditions as they are, and when those conditions change too rapidly the result is usually organism death. So while we shouldn't be too quick to dismiss the potential knowledge to be found in tradition, we need to recognize that the conditions which gave rise to it and which gave it its adaptive function may no longer exist. "Trust, but verify".
As for whether smart people thinking they can optimize the world is hubris, of course it is. That doesn't mean we shouldn't attempt to do so. Sometimes, we really do make the world better.
3) Dolphins don't have much chance to rule the world. Intelligence can MATTER, certainly, but it isn't necessarily the most important thing.
How much does tradition overlap between cultures?
Depends on the cultures we're comparing. As I said, I tend to think of tradition as a locally-optimized solution for a problem that a society has come to over time. So whether different cultures overlap in terms of their traditions would depend on the degree to which they've been exposed to similar problems and have independently arrived at similar solutions to it. I have no idea what the answer to that is or even if anyone has looked at that.
:- The world is very complex.
:- There are ideas that work that could be plausibly described as "reductionist" and/or "marginalist", but both terms are usually too vague to be helpful.
:- In most fields intelligence is valuable but soon hits diminishing returns and is far from sufficient, while knowledge and experience are vital and have much higher ceilings. In a few areas (e.g. pure maths) intelligence is much more important, but even there experience is easy to underrate.
:- Smart people thinking that they can optimise the world is hubris. To see this, look at all the things that humans disagree about, and observe that for most of those the correlation between which side you're on and measures of intelligence is very weak, and it's often pretty much zero. Conclude that for a lot of questions intelligence is clearly not sufficient for working out the correct answer.
I strongly agree with this, especially the last two paragraphs. But really the whole thing.
Hmm, should I snark about the question before or after answering the question? I think... before. Ahem: As worded, 1 is the only viable answer; trying to answer 2 renders the dichotomy too reductionist to be able to answer 2. Seriously, how are you gonna go all-in on marginalism?
I'd say 50/50. The world's complexity depends on what you're trying to do with it. Experimentation is what works. Knowledge, experience and tradition are valuable in saving time on your experimentation, but cannot replace it. Trying to optimize often leads to failure, but failure can then lead to success.
https://www.smithsonianmag.com/innovation/7-epic-fails-brought-to-you-by-the-genius-mind-of-thomas-edison-180947786/
I'd go with 40/60. The world is very complex, incentives are what works, intelligence matters a great deal, tacit knowledge and experience and tradition are somewhat valuable, smart people thinking they can optimize the world sometimes works and sometimes doesn't.
To give an example, look at medicine. We think of bloodletting as medieval, but it was common into the 19th century. This was one reason why homeopathy became popular: it was doing nothing while "real" medicine was hurting people. Smart people applying a simple idea, "medical interventions need to be tested against a placebo," beat the force of medical tradition, which had been building up a massive body count for thousands of years. And it's not like people in the 12th century couldn't have done it our way because they lacked the tools to make the tools to make the tools. Comparison studies could have been conducted then; even though they didn't have all our statistical tools, "eyeball the chart" would have been better than what they were doing.
Just recently I heard a perfectly regular doctor give a plausible reason for bloodletting: too much iron in the blood is quite common, and bad for you. If the body cannot regulate it down, getting rid of some blood is the easiest fix. He suggested donating blood as the modern alternative. I haven't fact-checked any further, just repeating something I heard from a specialist who didn't look like he had axes to grind.
I thought too much iron was the result of a rare genetic problem, but maybe that's *way* too much iron.
Yes, hemochromatosis https://www.mayoclinic.org/diseases-conditions/hemochromatosis/diagnosis-treatment/drc-20351448
The latter, all the way. The former is just Top. Men. And even disregarding the not-great track record that has in the real world, it creates *huge* incentives to fake being one of those Top Men. And like all autocracy forms (and make no mistake, that kind of world-optimization plan relies on those at the top having the power to compel others to follow them, which is just as autocratic as a regular dictatorship), it quickly degenerates.
And there is not a single person I'd trust with that kind of power.
That's not to say that intelligence isn't valuable. It's morally flawed to do things you know are stupid; it's also morally flawed to give extra power to people prone to doing stupid things. But the diminishing returns to actually solving real problems or governing real systems are real, and kick in somewhere just above "normal". The world (or even any meaningful piece of it) is too large and too interconnected to be held in any one person's brain. Or even any finite set of people's brains. Much of it appears irrational, mainly because we can't see all the factors.
> it creates *huge* incentives to fake being one of those Top Men.
I thought the question was about how things actually work, not about what beliefs are more socially desirable. Isn't it one of the basic tenets of rationality to keep a clear distinction between these? There's already too much of "X can't be true because it would be bad if people believed it" out there in the world already...
First, I find the phrase "tenets of rationality" to be slightly...revealing. Religions have tenets.
The relevant part of the quote for that section was "world optimization should be tried, all it takes is high agency people with the right values." And that's a should-statement, not an is-statement. Overall the initial "dichotomy" (which I agree with others is more complex than that) was a mix of statements about present reality and statements about how we should structure society.
I’d put myself on the map as “The world is “””not that complex”””, reductionism works, political will is basically what matters, world optimization should be tried, all it takes is high agency people with the right values.” There’s no shortage of intelligence out there, there’s a shortage of cohesion and consensus, which basically are created by leaders who build them up.
At the margin I would guess the median American needs to update toward the former and the median ACX reader needs to update toward the latter.
Nice formulation! :-)
I don't think so. My impression is that Burkean pro-tradition/experience conservatism (the second part of the dichotomy in a nutshell) is way more common on ACX than in the broader public or almost anywhere else. The median American is more "trust the experts or you're a dumb uneducated hick".
The median American trusts the subset of experts telling them what they think they already know and/or want to hear. This almost by definition results in marginalism, and the rest of the second package. Hnau has the gist of it.
I interpreted his post as saying that the American public should believe the first proposition more than it currently does (ie, should trust the experts more) and ACX posters should believe the second proposition more than it does (ie, the value of tradition and experience). The aggregate political result (slower change due to roughly 50/50 split between the political tribes) is besides the point.
I know that I've massively missed the bus on this, but is anyone else annoyed by the historical inaccuracy of the name "rationalist community"? In the history of Western philosophy, the divide between rationalism and empiricism is one of the main splits, and the modern "rationalist" movement clearly falls on the empiricist side. Empiricism was about basing your view of the world on sense data, which is what modern "rationalism" does with its focus on Bayesian updating as the core means of knowledge acquisition. Meanwhile, actual historical rationalism held that if there was a conflict between your preconceived internal ideas about the world and your sense-based observations, instead of updating your internal ideas, you held that it was your senses that were wrong. This is how you got stuff like the Eleatics (who were essentially proto-rationalists) holding that change didn't exist despite change being observable at every moment of existence, or Leibniz holding that this was the best of all possible worlds despite all the easily observable evil in it. As you can see, this is the complete opposite of the epistemic system advocated by modern "rationalism". If I were to come up with a more accurate label for this movement, which I know it's much too late for, I'd call it Bayesian Empiricism, or maybe Neo-Empiricism. Anyways, that's my rant, I know it likely won't change anything but I had to get it out there.
> Leibniz holding that this was the best of all possible worlds despite all the easily observable evil in it.
He wasn't saying that this world was *good*, he was saying that every other world which could possibly have occurred would have been *worse*.
There are several different traditional meanings of "rationalism", depending on whether we talk about philosophical rationalism, theological rationalism, political rationalism, etc. Specifically, the political rationalism refers to things such as rational choice, utilitarianism, secularism, which is quite similar to the values of the "rationalist community".
(There are even more confusing words out there; for example, according to Wikipedia, there are over 50 mutually contradictory meanings of "realism".)
Empiricism and Bayesianism are not precise either; they are just parts of the whole thing.
Basically I agree. But have you not noticed Scott regularly discount entire scientific papers because of strong priors? When your (ideally) Bayesian-derived knowledge gets solid enough, parts of it start looking like good old (proper) Rationalism. See for example "The control group is out of control".
I'm not the least bit annoyed when a community I sort of respect tells Western philosophy that it's been getting things wrong for most of its history. Rationalism is a poor name for what Descartes et al were talking about, and a pretty good name for what Yudkowsky et al are talking about.
It's true that "rational" is sort of a weasel word that can mean all sorts of things. Though, I can't think of a better (i.e. more specific?) name for Rationalism qua Descartes than what was historically chosen.
What would have been a better name? I've never been able to think of one, though I've often tried
Wait, what about SAICS? Smart Aspie-Influenced Common Sense.
Yup, this annoys me.
I try to mitigate the annoyance by saying "empiricism is rational, therefore empiricists are rationalists" and similar, but it doesn't really work
luckily the whole empiricism vs rationalism debate has been obsolete for hundreds of years now and only us history-of-philosophy nerds even notice the annoyance
Somewhat relevant: https://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/
Yes, I know others have been annoyed by this. I remember reading some major columnist (Ross Douthat maybe??) noting that internet rationalism has more in common with philosophical empiricism than philosophical rationalism.
https://www.politicshome.com/news/article/boris-johnson-bamboozled-scientific-advice-covid-pandemic
Discussion of Boris Johnson not understanding exponential growth (very important for policy on COVID) or that science is a process of learning.
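The exponential point is worth making concrete, since "a few hundred cases" sounds reassuring right up until it isn't. A quick Python sketch with round, made-up numbers (not real epidemiological data):

```python
# With a fixed doubling time, cases multiply by 2**(days / doubling_time).
# Illustration only: 100 cases doubling every 3 days passes 100,000 in a month.

def cases(initial: int, doubling_days: float, days: float) -> int:
    """Cases after `days`, starting from `initial`, doubling every `doubling_days`."""
    return round(initial * 2 ** (days / doubling_days))

start = 100      # cases today (made-up)
doubling = 3     # assumed doubling time in days (made-up)

for day in (0, 9, 18, 30):
    print(f"day {day:>2}: {cases(start, doubling, day):>7,} cases")
# day 30 -> 100 * 2**10 = 102,400
```

The intuition the arithmetic is meant to correct: linear thinking says day 30 looks ten times worse than day 3; exponential growth says it looks a thousand times worse.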
I'm wondering what a Science for Politicians course would include.
Just to be utopian and science fictional about it, they have to pass the course in order to hold office.
Great idea -- and while we’re about it, how about a “politics for tech leaders” course as well? I’m looking at you, OpenAI board...
Politics for EAs would have been useful to avoid wasting so much money on that doomed candidate in Oregon.
"UDT And Why You Should Not Give In To Blackmail For Tech Leaders" course
It is very difficult to teach a person something when their livelihood often depends on not learning it.
My proposal was for this to be taught in college, before people are in office, and enforced as a requirement for office.
Yes, I am sorry I was making a joke. Politicians often have to run for office and get votes amongst a lot of people that might not be that keen on some of the things you want to teach them.
I don't envy politicians much. It's really hard to communicate when everything you say will be reduced down to a four word sound bite and interpreted in the most hostile manner possible.
There's also the effect Paul Graham described where it is impossible to communicate more than 1 bit of information to a large audience.
I mean, how hard is it to get "ingroup goooood" down to 1 bit?? They've got it easy!
It would have to start with basic arithmetic, surely.
Distinguishing "million", "billion", and "trillion" for public officeholders (and the journalists who question them)...
As I said on the other comment, before you can do even this much, you'd need to explain that numbers actually refer to things external to people's opinions of them. For example, if I say there are 1.5 million cars in the world, and you say there are 1.5 billion, then it is not merely the case that our statements are substantially different -- it is also the case that we can actually go out and check. There actually exists the correct answer, regardless of who is on whose side in which political/religious/whatever debate.
Agreed! (There is some unavoidable fuzz from corner cases. E.g. Does a car which has just been in a collision but has neither been assessed by a mechanic nor by an insurance adjuster to decide if it is a total loss count? But this is minor, and almost everything has error bars.)
Go out and check. I'll wait.
I wonder how long it would take you to count all the cars in the world...
Not that long:
https://www.google.com/search?client=firefox-b-1-d&q=number+of+cars+in+the+world
Of course this is not an exact number, but rather an estimate based primarily on sales figures; thus, it has pretty large error bars attached to it (metaphorically speaking).
100% yes to this! In another thread many words have been spent discussing minutiae of subharmonics “generated” by CD players. A simple lab measurement would demonstrate their nonexistence. But why find out objective truth when one can idly speculate? /mild sarcasm.
One way I’ve found to make those kinds of numbers seem more real is kind of a toy thought experiment, fun to think through if never come across it before.
Let’s say you meet a very wealthy and eccentric person. They decide to give you a lot of money, which is great, but only $1 at a time (eccentric, natch).
Let’s say they hand over one $1 bill every second, and they do this continuously, without a break for eating or sleeping or explaining how they happened to have such a large supply of dollar bills.
How long will it take for them to give you $1 million? How much longer for $1 billion? $1 trillion?
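For anyone who wants to spoil the puzzle, the conversion is one line of arithmetic per amount. A quick Python check (using 365.25-day years):

```python
# $1 per second means N dollars takes N seconds; convert to days and years.

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY  # ~31.56 million seconds

for amount in (1_000_000, 1_000_000_000, 1_000_000_000_000):
    days = amount / SECONDS_PER_DAY
    years = amount / SECONDS_PER_YEAR
    print(f"${amount:>16,}: {days:>12,.1f} days = {years:>10,.1f} years")
# $1 million  ≈ 11.6 days
# $1 billion  ≈ 31.7 years
# $1 trillion ≈ 31,700 years
```

The jump from "about a week and a half" to "a long human lifetime" to "longer than recorded history" is exactly the intuition the thought experiment is after.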
Yup, or even just abstractly phrased as "about how long is a million/billion/trillion seconds?".
Now, the Fermi question that I cannot answer is: What fraction of the general population, and what fraction of our rulers, can answer this question (reasonably) correctly?
I'd put that closer to estimation than arithmetic, and I think estimation is a very important skill.
Agreed! Answering "Fermi questions" https://en.wikipedia.org/wiki/Fermi_problem is a valuable skill.
I remember the "Future Strategist" podcasts by James Miller with Gregory Cochran, back in early 2020, where Cochran claimed in his seemingly-overconfident fashion that the UK government didn't know anything and were making stupid decisions based on what they wanted to be true rather than what was true. Miller gently pushed back, putting his trust in markets and governmental advisors, but Cochran was all, like, "nope!".
**sigh**
I once enjoyed a book called "Physics for Future Presidents" that was written in this spirit albeit about a subject that has lost cachet as something to worry about.
Yep, my parents bought this book for me, and it is exactly the book that answers OP's question
Yeah, and they have to understand tech too. I'll bet half of congress doesn't know what RAM is.
I'm not sure whether knowing about RAM is all that important.
Let's limit it to the 5 most important scientific concepts for politicians-- that's probably as much as can be covered in a one semester course, and remember that this is for people who aren't naturally good at such things.
Maybe there can be another 5-topic course for technology.
The article mentions exponential growth, and I think it could reasonably be expanded to getting an understanding of s-curves.
Evolution is another crucial concept.
Probably include that science is a process of figuring things out. Some parts are well-settled, while others are more likely to change.
That's three already. Other suggestions?
Angular momentum as related to kissing babies?
Nah, none of these will work. The most important concept, which is a foundation for all the others you listed, is that a) there exists an external reality, b) it cannot be changed by mere words, and c) science is a technique of using numbers to describe this reality in a very precise way that simply cannot be achieved with words.
I know this might sound basic, but most people (and politicians especially) have not internalized these concepts. The boiling temperature of water at sea level is not just some social convention or a captivating story or a talking point or a popular turn of phrase; rather, it is the outcome of a sophisticated model that is tied into many other models, and the model works so well that we call it "true". Water really does exist, and it really boils at 100 deg C, and no amount of speechifying will change that. You could take all your thermometers and throw them away or relabel them, and water would still boil at the same temperature; you just wouldn't be able to measure it anymore.
This is a really powerful idea that sounds deceptively simple, but is actually very difficult to fully comprehend -- otherwise, we'd have no need for the scientific method.
The problem isn't that politicians don't understand the idea of a physical reality. Everyone understands that, no matter how thick (e.g. ask them whether they'd put their hand on a hot stove or go without eating for weeks or whatever).
The problem is that in some cases, social dynamics are more important than physics, and politicians are embedded in those spaces. The laws of physics don't win you votes, being popular does. But it's not limited to politics either. The real hubris of Rationalists is assuming that people don't matter.
I agree. But how difficult it is to accept depends a lot on the context: in your example, it's quite easy to accept that water boils at 100°C, unless you would gain something from water boiling at a lower or higher temperature, or from it depending on the phase of the moon.
I would add a bb): external reality must be assessed by direct or indirect measurement, never by how well or badly it affects you (or anybody). This, imho, is the harder part, because of the natural tendency to use scientific descriptions of reality as ammunition for advancing one's particular cause or interests. Everyone is pro-science, but when reality does not advance (or worse, weakens) your cause, not so much anymore... even scientists :-)
I agree that RAM itself is not important -- it's more a sort of marker. About the science course. Well, I think the 3rd one is way too general. Anyone with an intro-course grasp of any branch of science will have run into compelling evidence of it, and will get the point. I think I would vote for giving politicians well-taught courses in 5 things they are likely to be called upon to understand in the present day, rather than a general grasp of big concepts.
-how epidemics work
-how hatred of other groups works, and what promotes development of alliances & tolerance
-how the economy & financial systems work
-effects of tech on human lives -- good and bad
-what constitutes progress -- there are different views
Seems like you could do a decent job with each of these with a few articles of about the length and difficulty of articles in the Atlantic or New Yorker, each followed by a discussion led by instructor. Those who wanted to learn more could be given a list of good sources.
Edit: I get that these are not exactly science courses. But you could teach each course in a way that brings in a lot of science. Even the question of what constitutes progress could include lots of data using various measures of progress: percent of population below a certain standard of living, fraction of world population engaged in war, frequency of suicide, happiness polls.
How epidemics work seems like a very advanced subject, given that experts on the topic did quite a poor job at predictions.
More than politicians sucking at mid-level math (hardly surprising, given the popularity of "I hated math in college" in any celebrity interview), what strikes me is the incredible confidence and ego boost people got during early covid from mastering exponentials and two-variable ODEs. Scientists and engineers were all over Youtube with popularizations about exponential growth (It doubles every 5 days! How long before everyone on earth is contaminated? Let's talk about the wonderful logarithms!) or simple epidemic modeling using compartment ODEs like SIR (to show that you were a true genius, above even the already impressive exponential masters).
I must confess I got a moderate ego boost too, since I can do all that and easily follow all those new youtube stars (the contrary would be problematic for someone earning a living in Computer Aided Engineering, although sometimes you can be surprised at how well people master things absolutely mandatory for their job), something I am not so proud of now, after I saw how the covid crisis was dealt with by those "experts":
Those super simple models were never questioned, validated, improved, or discarded as unfit once they failed to provide predictions accurate enough for defining policies. And fail they did. But they were used to justify ridiculous measures that fly in the face of even a modicum of common sense (wear a mask when hiking in the forest, you terrorist punk!)
So before throwing stones at Johnson for not understanding what exponential growth is (or fractions, from what I hear), maybe there is serious work to do in the garden of those who do understand exps and logs.
PS: not that I am against throwing stones at Johnson in general, but let's do it over more serious issues than being bad at math or maybe plain stupid, like not following the social distancing measures he himself mandated, for example....
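For anyone curious, the SIR compartment model mentioned above really is only a few lines. Here is a toy sketch with forward-Euler integration and made-up parameters (beta, gamma, and the population size are purely illustrative, not fitted to any real epidemic), which also illustrates the commenter's point: it is trivial to write, and that is exactly why it deserved more validation before driving policy.

```python
# Toy SIR model: S (susceptible), I (infected), R (recovered).
# beta = infections per contact-day, gamma = recovery rate per day.
def sir(beta=0.3, gamma=0.1, days=160, dt=1.0, n=1_000_000, i0=10):
    s, i, r = n - i0, float(i0), 0.0
    peak_i = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # S -> I flow this step
        new_rec = gamma * i * dt          # I -> R flow this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

s, i, r, peak = sir()
print(f"final susceptible: {s:,.0f}, recovered: {r:,.0f}, peak infected: {peak:,.0f}")
```

With these parameters the basic reproduction number is beta/gamma = 3, so the model predicts a large final epidemic size and a sharp peak; change beta slightly and the predictions swing wildly, which is one reason such simple models make poor quantitative policy tools.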
Agreed on the importance of each of the topics you've picked.
A bit skeptical about
"Seems like you could do a decent job with each of these with a few articles of about the length and difficulty of articles in the Atlantic or New Yorker, each followed by a discussion led by instructor"
E.g. I don't remember whether the intro economics course that I took as an undergrad was a half year or a full year, but I'm reasonably sure that it was perhaps a factor of 2 or 3 longer than you suggest.
I agree. But politicians are busy and impatient, some are not well-educated at all, and some are not very bright. I was trying to think of something minimalist that would work for people who have not sat down and read something a bit novel and challenging in quite a long time, if ever. Teaching the equivalent of 5 New Yorker articles on a given subject would constitute a gigantic improvement in their grasp of issues related to the subject. There could be an optional phase 2 where politicians can earn certification in each of these subjects; certification courses could be a semester or even a year long, and involve homework and final papers. What do you think of that?
That sounds very reasonable. Many Thanks!
This week I have been thinking about finding a voice. I have experimented with writing in a few different styles this last year, and what I have seen is that my writing style always reverts to something that sounds like a generic magazine article, and is quite plain. This is not all bad, as I can now produce a lot more words of above-average quality on demand, but it feels to me that the next step in my writing would be to focus more on the execution and the details, like word choice and sentence structure, or whatever else will help me express myself more in my own voice.
I read something recently that seems relevant; something like, “you’ve found your voice when you know which criticisms you can live with.” Like, as soon as you break out of bland genericism, you’re laying yourself open to some sort of criticism. And it’s the paralysing thought of this criticism that keeps you on the generic straight and narrow. And you know you’ve found your voice when instead of thinking “oh god, is this too flowery/plain/whatever?” you’re like, “some might think that, others won’t, it’s my voice, so be it.” (Obviously there’s a balance here, sometimes your inner critic is right, etc.)
"which criticisms you can live with.” That's excellent, generalizes to any creative endeavor.
I like that thought
Change your literary diet. Read way fewer magazines and blogs that sound like magazine articles, and binge on prose masters. Here's a list of people whose essays absolutely delighted me: Virginia Woolf in *The Common Reader*. George Orwell. Tom Wolfe in *The Kandy-Kolored Tangerine-Flake Streamline Baby*. Dwight Macdonald in the book *Against the American Grain*. Edmund Wilson. Gore Vidal. Daniel Dennett. Oscar Wilde, *De Profundis*. Ruskin. Alexander Pope, an essay called *Peri Bathous* (18th century, but very readable; this essay made me laugh so hard I cried). On Substack, I think Sam Kriss writes very well.
Try out a bunch of them, and then keep going and read more of the ones whose prose you especially enjoy. If you absorb some style from several you'll be in less danger of becoming a simple imitator of one.
To loosen up, try writing when stoned or drunk. Or record your thoughts on audio, then turn them into prose without fully cleaning them up.
Later addition: Was ruminating about your topic, and another idea came to mind. Start off your articles in ways that magazine articles never start, as many different ways as you can think of. That will help you start off without having a magazine article mental set. So you can start off with
This is my biggest secret.
Shut the fuck up and hear me out.
You are about to experience my deepest and craziest thoughts.
Yeah, you're smart, but so am I.
You are one of the few people who can fully grasp what I'm about to tell here.
NOW HEAR THIS:
Don't worry for now whether you can use such beginnings in actual articles.
Oh I love that idea! I definitely could spice up my literary diet a little bit. I don't know why, but I just assumed that no good essays were written before the Internet came along. Thank you for this!
This reminds me of Étienne Fortier-Dubois's idea of human writers priming themselves with AI-style prompts. ( https://etiennefd.substack.com/p/prompt-engineering-for-humans ) I've never tried this – my problems are more fundamental – but it's an interesting tactic.
This sounds interesting. So what are your problems?
I have something of an anti-ear for prose. To continue playing with the idea of prompts, all the magic sauce that goes into the positive prompt to get the universe to spit out a Vonnegut or an Updike or a Leonard goes into the negative prompt that gets you me. At comparable weights, I'll proudly add.
Consider the two sentences in my reply to Eremolalos. That first 'this' ought to be a 'that', it would have been trivial to avoid the close repetition of 'of', and those parenthetical en-dashes, so awkward in such a short declaration, could be dispensed with by means of a simple 'because'. I am Nabokov's Ilya Borisovich Tal made flesh.
But this is a thread about you, not me. Of the three latest essays on your Substack, the one on shrooms has great style, the endorsement of Leahy is not far behind, but in the most recent one on the Twitter rep system you lapse into a more mechanical explicatory mode with more Slavicisms. (I don't speak Bulgarian, but I do speak Russian, and I think I recognise the temptation to write e.g. 'people who are to your liking' because it's closer to которые вам по душе than the more idiomatic 'people you like' – that 'who' is pretty strongly felt to be necessary.) It could have used a little more metaphor, a little more colour, a few more rhetorical questions of the kind that made the other two swing.
Just one rando's opinion, of course, but meant constructively, and perhaps of some value as you tune the voice.
Wow, that was such a great reply! And you actually took the time to read through some of my writing, I'm blown away! Thank you for this! The Twitter piece is the one which I spent the least time thinking about, and clearly it shows. Thanks again for the feedback
Are you writing fiction/stories, or are you writing articles?
I'm writing articles. Sometimes I would do a more fictional piece but it's mostly articles/essays
Is the way you talk different from the way you write?
Yes, quite different. Also, my first language isn't English and I think this affects my writing (in English) somewhat.
Perhaps there's some way to build on your literal voice to have a voice for your writing.
Yeah, I've been thinking about that as well. I guess it's gonna come down to practice in the end, so I am just trying to keep on writing the same amount as before but experiment more with the voice
I see more information about JFK's assassination has come to light with the recent publication of a book called "The Secret Witness" by 88-year-old Paul Landis, who at the time of the incident was a Secret Service agent in the car behind the President's.
https://www.dailymail.co.uk/news/article-12586167/JFK-proof-two-shooters-secret-service-agent.html
That article reminded me of an analogous incident much further back in time, about which I recently learned more when reading a book called "William Rufus" by Frank Barlow, published by Yale UP (2000).
https://www.amazon.com/William-Rufus-English-Monarchs-Barlow/dp/0300082916/
He sounds like a fascinating character, and in some ways quite modern in outlook. But so repellent did his attitudes and behavior seem to contemporary historians, who were mostly clerics, and to many since, that he's had a "bad press".
Barlow devotes a chapter to the event of his reign as William II (1087-1100) that is best remembered today: his mysterious assassination in the New Forest, in the year 1100. In relation to this, he includes several facts recorded at the time, but makes no attempt to identify a culprit from among the many suspects.
As luck would have it, I found an ebook copy of his book on a Russian website a month or two ago. (Those naughty copyright-violating Russians have literally tens of millions of ebooks stashed away, many bang up to date, if you know where to look!)
When I reread the book, the available facts pointed to a clear prime suspect for the killing. My conclusion, for reasons briefly summarised below (if anyone cares much), is that there weren't two assassins, as possibly in JFK's case, or even one. I believe the most likely truth, based on all the facts that can be known today, is that the silly sod accidentally killed himself!
The first fact is that he was killed while out hunting. Now there were various kinds of hunting, and on that day it was not the kind of hectic style, with packs of hounds, and riders charging about blowing horns. It was a stealth mode deer hunt, in which the participants spaced themselves individually widely apart throughout a forest area and waited for deer to gallop past, which they would try and bag with arrow shots.
Normally on a hunting day the participants would head off, keen as mustard, literally at the crack of dawn. Apart from anything else, it might be several miles from their overnight lodging to the hunting ground. But we are told that on the morning in question they didn't start until after midday. One historian claimed this was because the King had drunk more than usual the night before and had a hangover.
Another chronicler mentioned in passing that a blacksmith arrived at around midday and delivered six arrows, of which the king kept four and gave two to a sidekick called Walter Tyrrell. Although apparently a trivial aside, hardly worthy of mention, this fact may be a key to the mystery!
So in summary, at the start of the fatal day we have a king who may be a bit woozy from the night before, and thus not fit to operate heavy machinery, or any machinery, including new-fangled cross-bows.
Perhaps it was not a hangover which delayed the start of the hunt. Maybe the chronicler just assumed that was the reason for the delay. To my mind, another obvious possibility was that they were waiting for something. From the facts recorded, that was most likely the arrows which the blacksmith was due to deliver.
Now imagine a Texan billionaire inviting his rich pals on a hunting trip. On the day, they all have to wait for a gunsmith who eventually turns up with a mere six bullets. Sounds ridiculous doesn't it? They would have crate loads of ammo, and it would have been the same with arrows on that fateful morning.
They must have had ample supplies of arrows, with normal heads, and (broad bladed) hunting heads. So the arrows the blacksmith delivered must have been very special, and I suspect they were crossbow bolts. According to Wikipedia, crossbows were only reintroduced into Europe at around this time. So they were cutting-edge technology, doubtless with various models regularly appearing, as with any new technology.
Standing nearest to the King during the hunt was Walter Tyrrell, some hundred yards away. No participant would have had much of a clear view of any others, especially as they would all have been trying to look inconspicuous so the deer would not be deterred. For the same reason, they wouldn't have wanted any attendants or servants nearby.
Immediately after the killing, Tyrrell hoofed it to France, in the not unreasonable belief that he would be blamed. But for the rest of his days, including on his deathbed, he swore by the blood of Christ that he was not responsible for the fatal shot, and most people took their religious oaths very seriously back then, especially when they were about to meet their maker!
So in summary, I believe the king was fumbling to load or reload his crossbow, and possibly turned it upside down, so he could push the bowstring down with his foot (if it was an early model without a windlass, so he would have had to pull the bow string back by hand). Then he spotted a deer, and in the heat of the moment, he nocked a bolt while the bow was still propped on the ground pointing up at him, and the rest is history ..
Reality is not a mystery novel where you're carefully presented with a minimal set of facts designed to lead to the correct conclusion. It's not necessarily the case that the facts known are relevant or even correct, or that the mystery is actually solvable.
My favorite JFK assassination theory is similar. It goes that the Secret Service killed JFK, but it was an accidental discharge that they then covered up. I wonder how many of the great mysterious murders of history were truly accidents that no one believed.
Sounds a bit of a long shot (literally!), but who knows? One thing for sure is that after an "official" accident which there is any chance of concealing, reputation management goes into overdrive.
In the William Rufus case, for example, if the arrow in his chest was obviously a cross-bow bolt, and Tyrrell was believed and thus ruled out, so an accident along the lines I sketched was suspected, then to preserve the royal family's dignity the official story may have been similar to that related by William of Malmesbury (see ZumBeispiel's reply below).
Luckily there happened to be a known local loony hiding in a nearby building with a sniper rifle so they were able to blame the whole thing on him. Lucky!
Nah; they put him up there to cover for the accident they knew was waiting to happen. The second shooter just happened to be in the right place at the right time. Total coincidence.
JFK conspiracies crack me up
To add to the improbability:
The 5 dimensional chess move was to conceal the accident in order to create a cottage industry of conspiracy theories in order to stimulate the economy. :-)
[lizardmen involvement as an extra cost gourmet exclusive option]
It was the culmination of a plot hatched in 500 CE; that’s the dirty little secret no one is on to. Subscribe to my newsletter for more.
When is Assassin's Creed: Dallas coming out?
Does this cover the predictions and conspiracy theories encoded in the cave painting in El Castillo? https://www.oldest.org/culture/archaeological-sites/ :-) ( Can a plot incubate for 40,800 years before hatching? :-) )
“Favorite” as in most outlandish and amusing? Or favorite as in you actually think that’s what happened...?
Bit of both. Or maybe the most narratively satisfying is the better way of saying it. So much ink has been spilled trying to find a second gunman, and pulling together a grand conspiracy to explain why it all happened. It would be ironic if in the end it all boiled down to bad trigger discipline and desperate ass covering.
I mean, yeah? By a pretty large margin. I don’t know what to believe in terms of conspiracies, but when the leader of the free world gets his head blown off in such an event, I think it’s safe to say *somebody* wanted him dead.
“Oops, my finger slipped?” Get real.
By sheerest coincidence I just finished reading "Day of the Arrow," a horror-thriller set in a rural French district where, even into the last half of the twentieth century, the peasants (who are secretly in the grip of a Mithran cult) ritually murder the local nobleman when the crops fail. Traditionally, this is done under the guise of the murder being a "hunting accident." One of the conspirators (who are planning to do away with the current marquis) tells the protagonist that this is the real story behind the death of William Rufus.
I saw the movie version of this on TV years ago; it was called "Eye of the Devil" with Deborah Kerr and David Niven as the leads:
https://en.wikipedia.org/wiki/Eye_of_the_Devil
Heh, it was that movie that got me to read the book. I saw it was on TCM On Demand a few weeks ago, and it looked interesting. But after finding out it was based on a book, I decided to read the book before watching the movie. But then the movie left TCM before I finished the book!
The film seems to have a middling reputation, and judging by the plot synopsis it has been much simplified. Apparently in the movie it's the wife who is actively trying to save her husband? In the book, it's an old friend of the marquis who is the active protagonist -- and whose motives are, um, complicated by the fact that he is also in love with the marquis's wife.
The book, at least, does quite a good job of conveying an air of sinister but sun-lit malice. Everything is bright and out in the open, and the "creep" is conducted at a slow burn. (Maybe too much so: rural horror is now so familiar that the conspiracy is visible twenty kilometers off to the initiated, and so the protagonist can seem pretty dim for not putting the clues together.) I'd recommend it to lovers of the James boys: M. R. and Henry.
Very interesting! It's fun to compare the Wikipedia articles about William II in different languages. Dutch Wikipedia blames it on Henry I, his successor. Most others blame Walter Tirel. Only the German Wikipedia proposes it was an accident, and cites William of Malmesbury as follows:
The day before the king died, he dreamed of being in heaven. He suddenly woke up. He ordered that light be brought and forbade his servants to leave him. The next day they went into the woods... He was accompanied by a few people... Walter Tirel stayed with him while the others pursued the prey. The sun was already setting when the king, drawing his bow and letting an arrow fly, slightly wounded a deer that jumped past him... the deer ran on... the king pursued it for a long time, raising his hands to shade his eyes from the sun's rays. At that moment, Walter decided to shoot another animal. Oh, good God! The arrow pierced the king's chest.
When he was hit, the king did not say a word, but broke the shaft of the arrow where it protruded from the body... This hastened his death. Walter immediately came running, but when he saw him unconscious, he jumped on his horse and quickly fled.
Hmm, yes his younger brother Henry (who succeeded him as Henry I) was among the hunting party, and he or someone to benefit by his succession are certainly among the suspects.
But I would say his elder brother Robert was a stronger one (or, again, one of his adherents, with or without Robert's knowledge). He had been Duke of Normandy, but had "pawned" it to William to obtain funds for his participation in the First Crusade. He was due back in a week's time, having doubtless spent most of the money on the adventure. So he would have been keen to see William eliminated, to make it more likely he could regain possession of Normandy without having to repay the loan.
The trouble with the contemporary historical accounts is that some are contradictory in certain aspects, and other authors literally made up things or copied them from each other or from earlier accounts of similar occurrences. Often they were more interested in making moral points than relating the facts.
26 year old female literature and history teacher who enjoys Bach and Mahler looking for an older male partner who also enjoys classical music, wants marriage and children in a few years, and is oriented towards technical understanding and good taste
lilyreadsyouremail@gmail.com
Of all qualities, why classical music? Not aggressive, just curious
what do you mean by old partner? can you share the desired age group of potential match?
between 27 and 40 would be ideal
Regarding your previous posts, are you still in Australia? That seems relevant.
yes, but willing to relocate for the right person
Pokemon Gen 2 was based on the Kansai region of Japan, which among other things, contains Nara, a city famous for its deer parks, where deer roam around the city and visitors buy special crackers to feed them. Pokemon Gen 2 also introduced Stantler, the first deer pokemon. Unfortunately, it's just a random wild pokemon on Route 36/37. They really missed an opportunity to have a fictional analog of Nara there. They could have made it like the Safari Zone where you feed crackers to Stantler in order to catch them.
I mean, there's the National Park where you catch bug pokemon, and the catching contest? It seems kind of similar, in that it's about nature and there are special things you buy to interact with the animals there in the park.
They do have Slowpoke roaming throughout Azalea Town at least.
I have recently had some work experiences that give me some insight into one of the potential reasons why so much of modern architecture is so ugly: because the way we build buildings these days requires architects to precisely specify minute details of every aspect of the building on computer generated 2D architectural drawings.
I work for an architectural lighting company, and recently I have been asked to start making production drawings for our orders. Sometimes the customer is fairly clear and this is easy, but not always. Today I finished up a project that took 20-30 hours, where I had to try to interpret the architectural drawings for a building to get the details of what the customer needed to order, and how we needed to build the lights to satisfy their need. There was such a staggering amount of data on these drawings. I was extremely fortunate to have a copy of the drawings on which the architect had helpfully highlighted all of the locations I needed to scrutinize, and to have received only the pages relevant to my needs, though I could tell from the page numbering and table of contents that there were over a hundred pages in the whole document. This is a staggering amount of work to produce, and frankly not a particularly great way to convey all the information necessary to build this building.
For instance, some of the most difficult lights to interpret were the ones on the stairs. This building had a set of stairs with a super common arrangement, where you go up half a flight, turn 180 degrees, go up the rest of the flight, then repeat. They wanted linear lights on the underside of the stairs. But despite having multiple views of each set of stairs, the only way I was able to figure out that they actually wanted u-shaped runs on the bottom of each set of stairs was from the hand-made drawings somebody higher in the process than me had produced, which were included in the information packet I was given. These kinds of stairs are simple and common, and yet, with what is essentially a square spiral shape, not that easily depicted in a series of 2D drawings. Especially when those 2D drawings include not only the lights I am trying to specify, but also all the structural elements and trim and flashing and every tiny little detail. It is just so much to go through.
Hundreds of years ago, when a team of builders built a cathedral, there is no way they would have specified all the minute details like this, especially for something like lighting. They would build the structure of the building, with some plans for how different parts of the structure would be illuminated. Then, when it was time to finish the interior, the aristocrat in charge of the project would walk through the space with a head craftsman and discuss the broad goals of how the illumination sources would be arranged, and then the head craftsman would work with a team of skilled artisans to build and install the lamps and other fixtures in situ. The important point being that the small details would be left up to the skilled artisans responsible for the labor of manufacturing and installing the fixtures.
My company COULD do things this way too, if the world were set up to operate this way. Our products are highly customizable and not terribly complicated. We could have sent out a team of a few skilled artisans in a truck with a nice portable mitering saw and a pile of the materials we build our fixtures from, and they could have built everything on site exactly to fit the space, with only vague direction from the architect about what needs to go where and what kind of style and illumination they want in each location. I think this would be cheaper and take much less time overall than the way we currently do it, where we spend many hours with customer service and reps and everyone going back and forth again and again on exactly what is needed. Our products aren't difficult to assemble and don't require heavy machinery. They could be assembled in the field. And this would mean we wouldn't need to spend a long time carefully packing them for shipping, which is difficult and expensive given that our standard size fixture is 8' long.
But this isn’t what the customer thinks they want. They want a highly customizable pre-made product that they can slip into place with unskilled laborers. We have a ton of problems with our products being installed incorrectly, which just adds more time and back-and-forth, and shipping broken products back and forth for repairs and adjustments and replacements. And it requires that we build our products robustly enough to be installed by laborers who we know will damage them, and robust enough to be shipped without breaking. It all feels incredibly wasteful and unnecessary to me. But this is how builders and architects expect things to work.
And 2D drawings… really? Can’t you just give me a 3D model of the building? No, of course not, that would violate somebody’s intellectual property. That or the architectural drawing software the architect uses won’t give us a license to a reader for those files.
Point is: the way we build buildings these days is with the expectation that every single minute little detail is fully specified in drawings before construction begins. This requires a tremendous amount of effort to plan out, and generates a tremendous amount of data that is difficult to efficiently convey. And of course, standard features are much easier to draw/design with architectural software than some complex, novel artistic concept. And so architects and designers feel this pressure to keep repeating the same patterns that are easy to draw again and again, which is why so much of modern architecture is boring, ugly, and similar.
Nice. I really enjoyed this. I wrote a piece for Planetizen where I highlight the role of having to describe things in words. I think it complements your point about having to describe things in diagrams. Both make it difficult to rely on tacit knowledge, and a lot of what is beautiful depends on tacit knowledge.
https://www.planetizen.com/features/116257-where-words-fail-teach-architects-and-urban-designers-violinists
I've long suspected the same thing about McMansion roof lines (especially the nubs).
Visual aid for those unfamiliar: https://www.youtube.com/watch?v=YX3G1r3ynfw
The auto-generated roof in most CAD software defaults to each wall getting a slope. It takes an extra 10-15 clicks in Revit to convert the default four-sided roof into a gable roof. It's too easy to draw a crazy floorplan and then just let the computer calculate all the weird rafter angles for you. If you had to draw all that by hand, you'd be much more inclined to keep the outer walls to a simple rectangle or L shape.
Ooh finally a comment that touches on an area of my expertise. I work in Engineering Consulting and am one of the people responsible for a subset of the drawings you are probably looking at. I'm an electrical engineer and I specialize in lighting, as a matter of fact. Construction Drawings are the way you've described, in my opinion, because of what we in the business call CYA, "Cover Your Ass". In summary, the structure of design contracts incentivizes over-specification from all of the design teams because if during construction something needs to be changed due to a miscommunication or omission on the drawings, the cost of those changes will be borne not by the contractor or the owner but by the designer. Design firms stay afloat by completing a significant volume of work, completing it quickly, and avoiding change orders.

There's also the matter that construction permits are issued based on drawings before work begins, and the authority having jurisdiction has a legal responsibility to verify the project is designed to be code compliant, so there's another reason drawings need all this information on them.

I think that standardized drafting definitely has some influence on how any building project ends up looking, but I think it's a pretty weak influence, and I don't think it's actually the reason for the current style, since that style predates the mass adoption of CAD. In fact, if you think about it, CAD ought to make it easier to produce more adornment! Drawing details and specifications can now be easily shared between manufacturers, vendors, and designers and can be easily reproduced instead of needing to be redrawn by hand. It's not what's easy to draw that creates the style.
As for 3D models, a lot of 2D elevation and plan drawings are generated from a 3D model! Autodesk Revit is the industry standard. All of the building systems are laid out in 3 dimensions first and then exported into 2D drawings for contractors/ record keeping. In my experience, and since the vast majority of deliverables remain 2D drawings, these models tend to only be shared internally during design and the level of sophistication in the models, though it varies, is usually low.
Anyway, if you want architects to do more adornment or complex designs there's something you specifically can do! Encourage your company to produce and share detail drawings of the kind of lighting applications you'd hope to see, so that it's easier for architects to put them on projects. The stair example you've shared is a great case for that because like you said it's a very standard stair pattern. This might not lead to more neoclassical buildings but it could lead to more interesting and beautiful spaces unique to our own time.
Lastly, my take on this artisanal approach you're describing is that it's something that couldn't really ever be recreated at scale now, as the contemporary social, economic, and legal reality of construction is extremely caustic to it.
Good points, thanks for the input. I see a lot of change orders happening between the designer and my company. This particular project has been going back and forth (fortunately not that many times) for almost a year. Other projects I have seen have 5+ revision cycles. I think a lot of this could be solved by building on site once the installation environment is largely completed, like I described, though of course I understand there are lots of practical reasons why that can't work in today's culture and climate.
CYA is certainly an important and unavoidable factor that influences our designs and products as well.
I would like to be able to get access to the CAD models for a couple reasons. First, they are easier for me to interpret, even if they are very complicated already. I make my own drawings from within CAD and do a lot of design work as well, so I have a lot of experience with it. But more importantly, being able to directly measure design features out of a live model rather than a PDF would really help clarify certain things (though this could be done with 2D drawings as well). For example, with these U-shaped runs under the stairs, I don’t really know whether the dimension between the legs of the U is center-center, inside-inside, or outside-outside. I can imagine different people in different situations using any of the three. And it does matter if we are cutting components to fit within a 1/8” tolerance, as is our standard, given that our lights are several inches wide. Did whoever was drawing this properly compensate for the width of the fixture itself? It makes me particularly nervous for some of our custom bent/curved pieces, where the radius the drawing provides might be to the inside or outside of the fixture body depending on how it is mounted and whether or not it is recessed. I usually just take my best guess and go with it. Then again, maybe it is silly to worry about an error of a few inches in a curve with a radius of 50+ feet… the fixture is flexible enough to accommodate that amount of error.
I totally agree with you that it would be great if the company could release lots of models. I am a big fan of open-sourcing stuff in general. Good luck though – my boss is very paranoid and secretive.
On a related note, I have some pretty neat optical design capabilities and some ideas for how to build better LED light engines. The company I currently work for has a lot of heart, but is ultimately too small to really take advantage of what I have to offer, and can’t afford to do the R&D required to develop my designs. So I am looking to find a new employer with more resources, and even half-seriously considering trying to launch a startup. Is that something you would be interested in talking about? Anybody at a large lighting manufacturer (Acuity, Phillips, etc) you could potentially refer me to?
Definitely I'd be interested in talking more about light engines and fixtures! I have some contacts in the industry but nothing at any of the big manufacturers like Acuity, yet. I recently moved to NYC and I'm hoping I can parlay that into more networking opportunities specifically within lighting. Sorry for the delayed response to your comment, the holiday interrupts everything, but if you see this response I'd be happy to keep chatting with you about this stuff.
opticsol dot eric at gmail.
Awesome description, thanks for sharing !
I remember reading about a way in which modern buildings are specifically weak to after-the-fact customization: office building floors are made with a thinner concrete layer that only holds because it is under tension but will break if someone tries to drill a new hole in it (similar to safety glass).
Do you have other examples of practices that make later modifications of a building harder?
Slight correction – you’re probably talking about post-tensioning here. Very simplified explanation: concrete is terribly weak in tension but fairly strong in compression, and when a bending moment is applied in the middle of a member the top half is in compression (good) and the bottom half is in tension (bad).
Traditionally, the bottom tension forces are taken purely by rebar, which means that the bottom half of the concrete is doing very little (thermal mass etc. is still useful), but if you run steel cables through the beam then tighten them after the concrete has cured, much more of the concrete is in compression and you can get away with longer spans and/or thinner slabs.
No issues with drilling through the slab as long as you avoid the post-tension cables, so the biggest impact is that a hole may not go exactly where you want it. The following website recommends not drilling into a PT slab, but even if you get some idiot who starts drilling without checking for cable locations, you’re not going to damage the cables with standard concrete cutting tools (although if they’re an idiot, they may also start drilling with diamond drill bits, so don’t hire an idiot).
https://www.concretenetwork.com/post-tension/basics.html
I mean, one clear example that people hundreds of years ago didn't have to deal with is routing wires. Much, much easier to install all conduit for wiring before the walls are completed. That alone is a good enough reason why my proposal for modern-day artisanal lighting wouldn't really work out. People didn't need to install conduit for candles / lanterns hundreds of years ago.
I believe ugly/overly simple buildings were common before computers were in that much use.
Sounds like a good case for a VR app, where the user (in this case you) could walk or float round a virtual image of a building, using a joystick, and highlight and adjust various aspects which would be invisible in real life, such as temporarily making the walls almost invisible so you could see the wiring and pipework, etc. You could even be joined by the architect, as in a multi-player game, and collaboratively clarify things such as this stairway lighting. It's a damned sight cheaper to make adjustments with just electrons than with real materials!
Comment of the week here (speaking from my experience as a washed up architecture major)
See, if you live **inside** one of them, then you don't have to look at it!
(That's apparently how I handle architecture I don't like, anyway.)
Garish neoclassical? Do you mean historically accurate painted classical statues?
...many of which are not necessarily historically *accurate*, because reconstruction is more like "we were able to identify presence of these color pigments here and there, so we paint it by numbers" and less like "a masterpiece paintwork, similarly lifelike as the sculpture beneath created by equally well-skilled artisan sculptors and painters"
Good point. I've seen a suggestion that the actual artists would have done better than the reconstructions.
I think it's fairly obvious they would. The reconstructions are garish and awful.
Has anyone tried to present a project showing what a famous "white" statue would look like if it was painted in lifelike colors by an actually skilled painter?
I was assuming that neoclassical buildings would come with statues.
Regarding your language learning proposal from a while back, I think English -> Japanese is one of the *worst* examples you could have chosen. You could kind of do what you propose for closely related languages like Dutch or German where the word patterns closely match English (though even then, good luck explaining gender), but English and Japanese are just so utterly different that Mad Libs study makes no sense.
I would say the opposite. I tried to do it with a French sentence, but it was boring because the grammar matches so closely that most of the steps were identical. The proposed learning method is designed to help you learn a weird grammar by exposing your brain to different word orders.
But grammar is about WAY more than just different word orders. Again, with French or German, you can more or less do that (if you ignore gender and other inconvenient details). But Japanese grammar isn't just a permutation of English. All of the *concepts* and *building blocks* are completely different. You can't just map 1:1 between them mad libs style.
As I understand it, Scott wasn't claiming that the grammar is just a permutation. He was looking for a way for adults to "learn by osmosis", the way children learn when they have no other choice, but which is hard for adults because we keep wanting to use the language tools that we already have. The idea, as I see it, is that we'd eventually start to feel that it was "natural" for certain things to be phrased in certain ways in Japanese. And this goes from things like "these two grammatical formations appear the same in English but are different in Japanese" (and vice versa) to "these two concepts use the same word in English but are different in Japanese" (and vice versa). And eventually we'd fine-tune into all the little nuances that are hard to fit into grammar books and dictionaries.
It's not pattern matching or belonging to the same family. You'd generally want a language without much inflection, one that's highly analytic. English is a relatively analytic language (one of the most analytic in the Indo-European family) but most East Asian and Southeast Asian languages are analytic. As are many in West Africa. The fact that Korean and Japanese are highly synthetic is one reason some people argue they're related to Turkic or Mongolic languages (which are also highly inflected).
This is one thing that frustrated me about non-PIE languages. I was used to languages being more synthetic than English but many are more analytic. And the philology is often not all that advanced and most philologists deal with highly synthetic languages meaning a lot of the tools are inapplicable. Quite annoying.
Anyone have advice about reverse mortgages? We're going to outlive our retirement funds if we don't do something and that looks like an option. We have substantial equity in our home in a high cost of living area, don't want to move, and don't need to worry about leaving any inheritance.
Hey, someone pointed me here to give my take. So reverse mortgages are generally bad deals. While it depends on the exact terms you're generally selling your home equity at about a 50% discount or less. For example, if you're 60 years old and live to be 80 and have a house worth $500,000 then at current rates you'll get about $1,000 a month. Over the 20 years that's $240,000 after which they get the house, sell it, and make a cool $260,000. And this is without taking into account discount rates. You'd have to live well past 100 for it to make sense for you.
If you want to access your home equity, consider a cash-out second mortgage at basic mortgage rates (8%) or a HELOC (10%), either of which lets you access about 80% of your home equity. The advantage of a HELOC is that you don't have to take out the entire amount; you can charge things to it like a credit card. You can then take the same $1,000 a month out if you have a little discipline. In the same scenario as above, what would happen is that at your death the house would get sold, the bank would take the $240,000 you owe (plus any interest), and the rest would go to your heirs.
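To make the arithmetic in the two scenarios above concrete, here's a minimal back-of-envelope sketch in Python. It uses only the round numbers from the comments ($500k house, $1,000/month for 20 years) and deliberately ignores interest, fees, and discount rates, all of which matter a lot in practice:

```python
# Toy comparison of reverse mortgage vs. HELOC-style borrowing,
# using the round figures from the comments above.
house_value = 500_000
monthly_draw = 1_000
years = 20

# Cash you actually receive over the period, either way
total_drawn = monthly_draw * 12 * years

# Reverse mortgage: the lender takes the house at death
lender_gain = house_value - total_drawn

# HELOC / second mortgage: heirs get the house minus the balance owed
# (before interest, which at 8-10% would substantially reduce this)
left_to_heirs = house_value - total_drawn

print(total_drawn)    # 240000
print(lender_gain)    # 260000
print(left_to_heirs)  # 260000
```

The point of the sketch is just that the headline numbers are the same in both cases; the real difference lies in who bears longevity risk and how interest compounds, which this toy model omits.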
Of course, the danger there is that you can run out of money while a reverse mortgage lasts for as long as you remain in the house. But keep in mind it is as long as you remain in the house. You lose that equity if you move or get put in a nursing home or even get hospitalized for long enough.
Also, since we're generally dealing with relatively small monthly incomes, it's pretty easy to get a similar amount from a sidehustle. Which would be my first recommendation. If you really just need an extra $1,000 a month and you have free time (as most retirees do) it's not too hard to find something. Often from home and low stress.
This may be too late, but just in case ...
I've been doing some reading on retirement planning lately. Most recently the Retirement Planning Guidebook by Wade Pfau. Like one of the other people who've replied, I'd also heard bad things about reverse mortgages, but the book has me reconsidering. It sounds like they were cleaned up a lot since they first became available. I believe he also has a separate book completely dedicated to reverse mortgages which, presumably, has more details.
I don't know about US law, but in Europe at least there's also the option of selling the "naked property rights" while retaining a lifelong right to use it.
Some internet research tells me "naked property rights" is a thing in the US too. But how do I find a buyer?
Yeah, at least you'd want to sell to a major US or EU bank, not to some random individual who might have mafia connections. Financial institutions can have a bad reputation but they're also risk averse enough to avoid killing people for a few hundred thousand dollars.
I've heard pretty bad things. Maybe consider moving to a lower cost of living area first?
On OpenAI developments-
What an absolutely wild ride this continues to be... almost any option seems to still be on the table, including OpenAI's board stepping down and their successors reinstating Altman (good, but fraught without a clear account of the motivations and supposed reasons for his firing), the acquisition or acqui-hiring of OpenAI by Microsoft (bad short-term, ok-to-good-medium-term, probably bad long-term), or the OpenAI board staying the course and hoping the company isn't a ghost-town by next week under Emmett Shear as CEO (worst, no knock on Shear, but this is the bad-end outcome that likely results in bargain-bin acquisition by Microsoft with serious losses of employee retention and major interruptions in development and service).
These options and more are largely all in play, and Microsoft wins big in almost any scenario. This may be why Microsoft CEO Satya Nadella has been so magnanimous in keeping these many paths open, characterizing Microsoft's hiring of Sam Altman and Greg Brockman as a "holding action", and committing to continued support and partnership with OpenAI regardless of how things shake out. Microsoft currently has the ability to essentially end OpenAI as a solvent company right now, but Nadella has (imho) shown a great deal of leadership and pro-cooperation tendencies when the chips are down, at the present juncture... maybe in part because there are very few paths where Microsoft doesn't come out on top in this shake-up.
I reject any interpretation that some of the decision-makers here were "playing 5-D chess" or planned for any of this... there's simply too much variability in highly stochastic systems (such as human choice). Rather, the arc of this entire story has been characterized by extremely reactive decisions where the likely consequences weren't thought out or well-considered, with the outcomes of those decisions spiraling quickly into chaotic no-win scenarios. As usual, the winners here aren't those who had some kind of "grand master plan", or even expected the players to respond rationally according to their incentives and self-interest... but those who could respond quickly, effectively, and cooperatively to events where decision-makers acted in irrational and self-damaging ways, while also leaving opportunities open for "saving face" and not rubbing salt into the wounds of any perceived vulnerability.
I applaud Satya Nadella, Sam Altman, and Greg Brockman... this has been a master-class in damage control and applied game theory, in many ways... as well as Ilya Sutskever for admitting when he was wrong, taking accountability for his choices, and course-correcting. None of those things are easy or natural, and it speaks to the professionality of all involved that Altman and Brockman responded very positively to his contrition in the face of what must have felt like a massive betrayal by Sutskever.
I await any further developments just as everyone else is.
Well, Sam Altman is back in as CEO of OpenAI, with the majority of the board of directors stepping down. This is probably the best outcome that could have been hoped for under the circumstances!
It is somewhat concerning to me that Adam D'Angelo, the CEO of Quora, retained his seat on the board under the new regime. Despite D'Angelo's vested interest in Poe, his AI chat company, it seems to me that Quora has a direct conflict of interest with OpenAI, unless it pivots its business model significantly.
Ilya Sutskever is also off the board now, though it sounds like he will remain as OpenAI's chief scientist... signs look good that Altman and Brockman aren't harboring any vindictive feelings toward him, and that he'll be welcomed back into the fold. Still, a misstep like this will rightfully be a setback for Sutskever, and likely means that he will not hold a governance position at OpenAI or at any other tech firm in the future. A displayed lack of loyalty is very difficult to get past for anyone, and a broken trust is hard to make whole again.
What a crazy week this has been!
Your comment would be more interesting if you had spelled out why you seem to favour the opposite sorts of outcomes to most people in this and related spaces.
Yours would be a lot more interesting if it spelled out any basis or merit for your claim, instead of merely slinging accusations and being antagonistic for no reason?
I'm not sure I have the same opinion as Jack, but let me explicitly ask what I am curious about, and might be the same thing. I think this is one of the worst scenarios since the Board Members who cared about AI Safety, EA, and not killing everyone on the planet look like they would be removed. The best option seemed to be that Open AI stayed where it was as an AI non-profit that cared about AI safety and wanted to prevent it from killing us. Yes, Microsoft was likely to poach some of the talent, including Altman, but ideally enough would be left with Open AI to still do significant AI Safety research.
I am not the most informed AI person, so I am wondering where I might be making a mistake. Do you think my analysis is wrong and why do you think this was the Best Case Scenario?
I welcome the chance to talk about this a bit, and I feel more than a little regret that I flubbed it in this thread with Jack... when things seem really vague, I guess I default to assuming some amount of malice is involved. Whoops.
To be clear about my assumptions and priorities, I would say I'm very motivated by AI-safety, but not quite all the way in the "doomer"-camp so far... I think that the outcomes of AI development are going to depend a lot more on the actual design and the specific ground-up architecture of these systems, much moreso than the business policies, intentions, and governance of the companies working on AI today. I expect there to be *significant* changes in the industry in the next 10 years, and I don't expect true AGI or singularity-like events until 2040-ish (this week's news made me revise my estimate to 2045, but I'm updating back to ~2040 after Altman's reinstatement).
On the OpenAI front, the best outcome would have absolutely been the status-quo a week ago... a for-profit subsidiary governed by a non-profit board with members who take AI-safety very seriously. Now, it looks like that board is going to become far more like other Silicon Valley tech companies... motivated by iteration and disruption, quick dev-to-market delivery, and quarter-over-quarter ROI. This is very bad, actually! The reason I say that it's the best outcome, given the circumstances, is that almost every other path looked very likely to result in full acquisition or acqui-hiring by Microsoft... or (probably even worse) Alphabet or Meta. Remember that a significant portion of Microsoft's investment in OpenAI has been providing servers and GPU infrastructure... they could have easily pulled the plug or strong-armed OpenAI in this situation (there's nothing "safer" than an AI company that doesn't have access to any compute), but that's not at all what Nadella did. OpenAI maintaining independence as a company, and (for now) continuing the non-profit/capped profit subsidiary corporate structure with Altman at the helm is the closest thing to the status-quo that I thought could come out of this, and I do think it's a qualified win.
Other moving pieces here are that Musk famously bowed out/was forced out of OpenAI because he thought Altman wasn't prioritizing safety and transparency enough... so it is reasonable to assume that the same issues are sort of coming back for round 2, except that members of the outgoing board specifically said that their decision to fire Altman didn't have anything to do with AI safety (not that they offered any transparency on their *actual* reasons). Also, Anthropic broke with OpenAI for very similar AI-safety reasons... and Yudkowsky et al. have been doing excellent work at MIRI on AI-alignment for quite a few years (only for EY to essentially throw up his hands and declare the problem unsolvable with the current resources and timeline). If we manage to create human-level or superintelligent AI that doesn't kill us all as its first order of business, I don't believe it will be because we managed to create a mathematical proof of ethics to bind it with, or because the board of directors of OpenAI had the right composition of thoughtful, well-intentioned people guiding it through the end of 2023... I expect it will come after quite a few massive failures and successes in technology, societal adaptation, and systems integration between many of the advances that OpenAI has made (and will hopefully continue to make) and some others that are still very much on the horizon. When we get there, I expect the landscape to look very different from the map that we're using now... and I think the best chance that we have of getting a map that updates and responds to new developments quickly and anything resembling accurately is for several more iterations of AI technology to be very visible and undeniably apparent (instead of something that gets developed behind closed doors and ends up benefiting only those who are strategically invested in it). 
I don't believe the political will or public understanding will reach a point where we can marshal appropriate resources until it is very obvious how real the problem is, and how seriously it needs to be taken. Right now, Sam Altman seems to me to be the person best positioned to lead OpenAI in a direction that gets to that point with a reasonable balance of safety and practicality... he's been one of the primary people in the industry pushing for the democratization of AI, and while I find his safety strategy of "we'll keep moving forward until it becomes obvious we need to pause" more than a little concerning, I drastically prefer that to a timeline where it looks like nothing is happening for another 15 years, with opaque developments that are only used internally at large corporations... and then everything changes overnight.
It is not obvious to me that *time* is the only, or even primary resource for solving the AI-alignment problem. Nor is money, nor creativity, nor even intelligence, in a vacuum. It will take all of these resources and more... and I think a world where these technologies are available and accessible is one that is slightly more likely to be the one where our species survives. A longer runway would be fantastic, but it is not obvious whether significant progress can be made on the theoretical front in the time that is given to us... if there are reasons to believe otherwise, I'm all ears.
To be completely transparent myself, I am not the most informed person about many things, including AI... and much of what I'm saying here really is pure speculation. But that's how I see it... sorry for the essay, but let me know if you think I've made a mistake, or if I'm getting anything obvious wrong. I also absolutely reserve the right to change my opinion here if new information becomes available... it has been frustrating (I think for many of us) to watch this story unfold this week with crucial information like the actual reasons for Altman's firing withheld... and I'm operating in the dark just like everyone is.
I've read my comment back a few times, now, and I still have no idea what "accusation" I am supposed to have "slung". I was asking an (admittedly implicit) clarifying question, and politely enough, I thought.
I suppose the 'would be more interesting' framing is unnecessarily negative in tone. But is it really so rude to suggest that your comment was not *maximally* interesting?
In any case, that suggestion clearly has caused offence, so it seems prudent to move on without the clarification I was hoping for.
"You seem to favour the opposite sorts of outcomes to most people in this and related spaces"
This is akin to a "no true Scotsman" argument, and tacitly accuses out-group alignment.
Rather than saying, "hey explain to me why you're different from everybody else", especially when I'm not aware of a difference (nor of monolithic agreement here or elsewhere on many AI matters), try using specific details when you're asking your question. You might say, "Hey, I notice that you are being complimentary of Nadella, but I myself have a lower opinion of him for these reasons..." Idk, I don't know what your actual perspective is, or what "outcomes" you believe "most people in this and related spaces" predict... mostly because you didn't state anything like this... you just somehow jumped to the conclusion that I disagree with other people here, without supporting evidence... which doesn't really give me anything I can respond to. Am I supposed to guess at the ways you think I differ? Will you let me know when I land on the one you had in mind?
> This is akin to a "no true Scotsman" argument
It's really not.
> and tacitly accuses out-group alignment.
Again, I promise you it was merely an attempt at clarification.
My comment specifically referred to "outcomes", so it should be clear that I was referring to the section of your comment where you discussed the desirability of various outcomes. Therein, your ranking of outcomes certainly does seem contrary to the prevailing thought here, insofar as you explicitly prefer what is perceived as the 'AI safety side' to back down/lose, and call the scenario where that faction sticks to their guns (and with the perceived safety-conscious replacement CEO) the "worst" outcome.
I didn't think I needed to justify the idea that "most people in this and related spaces" think differently about AI risk and the OpenAI drama, given that it's being called a 'disaster', and a significant increase in x-risk, by some of the most prominent voices here, and I haven't seen a huge amount of dissent on those points. In fact amusingly, elsewhere in this very thread I've been told that the space lacks diversity of thought on this issue.
So yeah, your sentiments appeared to differ from the norm, here- which I didn't think I needed to say *is not a bad thing*- and I thought it might make for "more interesting" discussion if we could clarify the reasons why. I no longer think there is any prospect of interesting discussion, so I will again attempt to move on.
Christopher Mims recently voiced some suspicions I've had regarding smartphones ('Social Media Is Warping Into Old-Fashioned Mass Media,' Wall St. Journal, November 18-19). Is the nearly obsessive use of smartphones healthy?
I don't own a mobile phone, and never have.
It's not that I'm particularly against it, although it has made driving a lot more dangerous. I just don't see the point. I've never played a video game, either. Again, the point?
Is there anyone else out there who hasn't turned into a cyborg?
There are jobs that are almost impossible to do now without a smartphone. My work does all secure logins via a smartphone app.
OTOH I'm not sure the "Okta dance" (or whatever equivalent method) actually adds much security, compared to the death-by-a-thousand-papercuts annoyance it creates when you have to fumble with your phone every time you want to do stuff (or can't do it if you accidentally left it at home, or it's out of battery, or malfunctioning some other way - it happens!)
I don’t have an opinion on the security improvements of the Octadance, not my area of expertise. I just do what I’m told by the corporate overlords :)
Also, some restaurants, theme parks, etc. Life is becoming more and more designed around the assumption that everyone has a smartphone with internet access at all times.
Yep. It’s ubiquitous now.
Do not play video games, have not owned a TV for several decades, don't use my phone while driving because I don't have a car, just a bike (don't use my phone while biking either). On the other hand, I love playing with AI image generators, follow several blogs, and participate pretty energetically on ACX. So I'd say I'm only about half cyborg.
Well the point of videogames is to allow interactions with people without having to interact with, like, REAL people, who would want things from you. They're a massive victory for introverts.
I have not not turned into a cyborg.
Joke's on you, my video games don't have any interaction with people.
Unless I'm playing multiplayer and interact with real people.
I haven't had a chance to read all the comments on the Girard post, but I thought Scott was overly harsh on Girard, especially the last two chapters on political correctness (this was Girard's term). Scott writes: "So Girard is stuck in an awkward position of saying that the rise of concern-for-victims was good when Christianity is doing it, is bad now, and not having any good theory of what changed, or how this relates to the more speculative anthropology." I take Girard to be arguing that what went wrong is that contemporary western culture took the concern for victims from Christianity but then threw away the rest of the moral framework in which it was embedded. That moral framework includes, for example, exhortations to love your enemies and forgive those who persecute you. Take away those things and you end up with a system that is ostensibly concerned with victims but uses that concern to justify the kinds of scapegoating and victimization it's supposed to be against. As for why this changed, this is just Satan reasserting himself within the moral system that threatens his power: using the concern for victims against itself. Of course this doesn't explain why political correctness arose exactly when and how it did, but I don't think Girard is trying to explain specific details of history like that.
> Of course this doesn't explain why political correctness arose exactly when and how it did,
Arguably that was largely because increasingly prosperous societies were running out of genuine internal "third world" problems, so people have become hypersensitive to perceived first world problems, analogous to how an insufficiently challenged immune system can become over-sensitive and go haywire, with allergies and so forth, when challenged even mildly.
It's about post-Christian world and the values that have shaped it. After the Enlightenment, they dumped the Christianity but they kept a lot of the moral and ethical values, just deracinated and instead established on a basis of vague "rights".
This led over time to values floating in a void, and something like "compassion for the victim" being made an end in itself, and falling back into the same trap of "we need a scapegoat", except this time - due to the roots in Christianity - it wasn't the ostensible victim who was the scapegoat, but the persecutors and oppressors.
As always, I'd like to point out that these moral and ethical values predate Christianity by a long margin; as, sadly, does the notion of finding a scapegoat. Christianity is only about 2000 years old, and has had 4000 or more years of experience to draw upon and remix. Which is not to say that there's nothing new in Christianity whatsoever; rather, it made many incremental changes -- as did every other religion and ideological movement.
>As if Christians haven't consistently thrown out the moral framework of their own religion.
>People will always find ways to get their own religion to justify anything.
This looks contradictory. Do they throw it out or do they use it? Is this a coherent train of thought or a stream of anticlerical invective commonplace in internet spaces like reddit?
The BlueCrossBlueShield carrier I use got hacked, and what was taken was not just account numbers but also passwords. BCBS paid for all those whose records were exposed to get 2 free years of Experian identity theft protection. So I signed up for that, but am not sure that was a good idea. Experian already knew a shitload of stuff about me. In order to prove I was who I said I was, I had to give correct answers to a bunch of questions they asked me about my own finances, such as the names of banks I have used in the past, and the model of car I bought 5 years ago (how do they even know that? It was a cash transaction between me and the previous owner.). And when I signed up for Experian's identity theft monitoring I gave the company a bunch more information about my finances, including the numbers, expiration dates and security codes of all my credit cards, and the numbers of all my bank accounts. So now I'm thinking, so what if Experian gets hacked?
> Experian already knew a shitload of stuff about me.
Don't assume they had (complete) answers for all the questions they were asking. Some of the questions asked as part of the identity check process by credit bureau folk (and background check companies, and others who store lots of private information about you) are meant to fill in gaps in their own knowledge.
The process for signing up combines identity verification with profile building / completion work
I'm pretty confident they knew the info they asked me about in the identity check process. There were 5 questions and they were multiple choice, with 3 wrong choices and one correct one, about things like the names of banks I'd used in the past. And these questions included *dates*: "In 2018 you opened accounts at Bank of America. Which of the following was the bank you used immediately before that?"
Once I'd answered the questions correctly, then as part of signing up with them I told them a bunch more stuff, like account numbers for 3 different bank accounts and numbers, expiration date and security codes for 2 debit cards and 2 credit cards. If I'd had unpaid loans I'd have had to give them data about that, too, but I don't have any.
Unless you're currently in the process of applying for a loan, you should freeze your credit with the three credit bureaus to make it significantly harder for someone to take out a line of credit under your name and SSN.
It's not an absolutely perfect protection - theoretically someone can hack into the bureaus and unfreeze your credit - but it will thwart the lazier (and far more common) identity thieves from doing things like opening store credit cards or taking out car loans with your SSN.
https://www.nerdwallet.com/article/finance/how-to-freeze-credit
Well this won't make you feel any better: https://krebsonsecurity.com/2023/11/its-still-easy-for-anyone-to-become-you-at-experian/
> such as name of banks I have used in the past, and model of car I bought 5 years ago
I believe this is used to narrow down all the possibilities of people that could be you. Either the same or a similar name, or some other similar identifying information. Did they present other options for you to choose from? Likely someone else with your name matches some set of those other options.
> (how do they even know that? It was a cash transaction between me and previous owner.)
I'd assume from the DMV when the title was transferred or when you registered the car.
> So now I'm thinking, so what if Experian gets hacked?
Well, Equifax (an Experian competitor) did get hacked: https://www.ftc.gov/enforcement/refunds/equifax-data-breach-settlement. Not the most comforting news, but I haven't seen any huge fallout from that hack, so maybe a similar hack on Experian wouldn't be that bad? I also seem to remember speculation that the Equifax hack was by China or a similar state actor, so they may not have been interested in you or me when they got the data.
Experian are the ones who set credit ratings; if they get hacked, it's basically hacking everyone who uses a credit card. On the plus side, they're the ones who set credit ratings, so if they get hacked they're in the best position to compensate for that.
They probably know the car model because you registered it to drive it. The payment might be cash but the title transfer is on record.
Has anyone found a good dark chocolate brand that is low on heavy metals? I eat a lot of dark chocolate, like 1/3 of a bar per day, and I'd hate to give it up.
Well this is a new thing I never thought I'd have to worry about.
And it's even dangerous to eat healthy foods because:
"Even if you aren’t a frequent consumer of chocolate, lead and cadmium can still be a concern. It can be found in many other foods—such as sweet potatoes, spinach, and carrots—and small amounts from multiple sources can add up to dangerous levels."
Honestly? I wouldn't worry about it. You eat one whole bar over three days. Oh, the gluttony! You are going to die in the end anyway and you have to die of something. Eat the chocolate, forget the Californian health warnings about "everything will give you cancer".
https://www.consumerreports.org/health/food-safety/lead-and-cadmium-in-dark-chocolate-a8480295550/ this article claims to test them
I worry Altman’s sacking illustrates what I long feared: that the limited influence of AI safety enthusiasts on the world will be burned for negligible impact on AI safety.
Now is not the time.
LLMs reduce AI risk, in the same way calculators reduce AI risk: a person with a calculator is "superintelligent" compared to one without, so calculator technology raises the bar for how intelligent AIs have to be to surpass humanity.
(Of course, LLMs also increase AI risk, in several ways which were discussed to death here. But I expect no one to read this parenthetical! … also, it makes sense to me that exploiting LLMs for all they are worth will reduce AI risk, per the argument above, more than it will increase it, because in LLMs at least the initial training objective is reasonably orthogonal to paperclip-maximization arguments.)
Hmm?
I think there is little to be done to influence AI safety. There are just too many huge forces pushing things in the direction of rapid development: Competition between the companies, the gigantic sums of money to be made, US fear that China will beat us to the finish line, whatever that is. In my opinion the only thing that would slow things down would be some AI-related catastrophe that is so genuinely alarming that public attitudes shift a lot and even those strongly motivated to develop the technology take heed.
Has anyone here read Tom Holland's *Dominion*? I am starting it now, and definitely intrigued by some of the crossover between Holland's points and the recent "I See Satan Fall Like Lightning" review.
I really enjoyed it. Well written with a compelling and thought provoking thesis and lots of interesting historical anecdotes.
Haven't read it, but have watched the interview about it:
https://www.youtube.com/watch?v=cYkP46aYQIs&list=UULFiNGb4kjorb1XpElFZmvShA&index=18
Please tell me if I'm right about this
OpenAI is a private company. The board has no obligation to tell the PUBLIC, in advance or after the fact, that they were going to fire the CEO or why. The board members might have had an obligation to tell the shareholders (such as Microsoft) about this.
So I'm wondering why they're being called "secretive" accusingly. They had no obligation to share this with the public, even in the vague terms they did.
> The board members might have had an obligation to tell the shareholders (such as Microsoft) about this.
OpenAI doesn't actually have "shareholders" in the regular sense. It's a non-profit, and their agreements with Microsoft and others are clear that OpenAI has no obligation to make any profit for anyone, and they don't share any control with them either. See https://openai.com/our-structure
OpenAI is intended to work for the benefit of humanity. It's rather different than an ordinary private company. (That said, I still have no idea who's in the right here overall.)
The diagrams are pretty interesting and instructive. It certainly changes the tenor of the "revolt of employees loyal to Sam Altman" to see a chart which flags how the employees also own shares of the holding company that owns OpenAI Global, LLC, and presumably stand to make boatloads of money if the (alleged) dispute between Altman and the Board over more money vs more safety were to resolve in favor of the former.
Clicking through to the post announcing the structure also adds some context to all these references to "capped profit": it contains the sentence "returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."
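To make the "capped profit" mechanism concrete, here is a minimal sketch of the split it implies. Only the 100x first-round cap comes from the announcement quoted above; the function name and all dollar amounts are hypothetical, invented for illustration.

```python
# Illustrative sketch of a "capped profit" payout, using hypothetical numbers.
# Per the quoted announcement, first-round investors' returns are capped at
# 100x their investment; anything above the cap flows back to the nonprofit.

def investor_payout(investment: float, gross_return: float,
                    cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross return between the investor (up to the cap)
    and the nonprofit (everything above it)."""
    cap = investment * cap_multiple
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)
    return to_investor, to_nonprofit

# A hypothetical $10M first-round investment that eventually returns $2B gross:
inv, npo = investor_payout(10e6, 2e9)
# investor is capped at $1B (100x); the remaining $1B goes to the nonprofit
```

The point the structure diagrams make is visible here: below the cap, incentives look like an ordinary startup; the nonprofit's interest only bites on the upside above it.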
As depicted in the diagrams floating around, OpenAI is actually a family of different entities, some non-profit, some "capped" for-profit.
The board has responsibilities and obligations to various specific groups, and are in a sense stewards of what they "direct". I don't know that they have any specific obligation to do what's good for OpenAI, the limited-profit company, but I'd bet that the company is an asset of some other group (or an asset of an asset, it looks like?) that they do have obligations toward.
For example, I may not have a direct obligation to preserve my child's friend's Lego fortress, but I do have obligations to my child, and smashing that Lego fortress might make my child rebel against me in solidarity with their friend, and thus make it harder for me to do my duty. I should only do something like that if I have a good reason that outweighs the foreseeable consequences, such as if the Lego fortress were also a summoning circle for a primordial evil from beyond space and time.
When the board kept their reasons secret, this caused people to not trust their judgement, and some of those people were most of the employees of OpenAI. The board should have predicted this. Firing a founder is a big thing, firing a charismatic leader is a big thing, and firing the CEO of the world's leading AI company is a big thing. There were always going to be scrutiny and questions, and the board seems unable to respond.
OpenAI's corporate structure is a lot more complicated than just "a private company". See a diagram here: https://www.bloomberg.com/opinion/articles/2023-11-20/who-controls-openai?srnd=undefined&leadSource=reddit_wall
I don't know how it influences the announcement of this decision, but the board is made up of people who would not be on a traditional corporate board, and there are only 4 of them not from the C-level team. That's a very small board.
I think they are being called secretive because the action they took makes very little sense from the outside and even happened while Sam Altman was actively participating in PR activities for the company. Then there was some bad corporate communication from the board which allowed speculation to run wild.
That corporate structure is super complicated!
And many thanks to everyone who responded. You really educated me on this thread.
I think people are throwing around "secretive" because they didn't state plainly "We are giving Sam the Order of the Boot because (he stole the tea money)", it was some vague 'parting of the ways' which just inspired a lot of "what possibly happened???" speculation.
My leading theory at the moment is that the board were idiots.
A legit reason where you might fire someone and then clam up about what they did is when they did something kind of criminal, the board is anticipating that they might be facing criminal charges themselves, and is desperately avoiding saying anything that might be used against them in the anticipated criminal trial. But given the other hints, I don't think it's that.
FWIW, my suspicion is that the board were worried about the copyright implications of (possibly) Altman's keenness and willingness to slurp practically every word ever published anywhere between 2018 and now, to upgrade OpenAI to v5, just as I gather they did the same with every recorded utterance since the dawn of time to get as far as they have already. But obviously I don't have any inside info, and that is pure conjecture.
Because:
1) they may not have an obligation imposed by the US Government to tell people, but clearly a lot of other stakeholders are imposing an obligation on the board.
2) when *all* the leading theories make the board look bad, nobody should assume there's a good reason just because they haven't specified what stupid reason they actually had.
3) pretty much the only people not to have said anything publicly in the last 3 days are the board members. That is the definition of "secretive".
You can't impose an obligation like that!
They followed the rules when they were "secretive".
I'm sort of playing devil's advocate to make sense of the news.
If you need my dedicated work for your plans to succeed, and I tell you "you must do X or I will quit" and I mean it, then you may not be "obligated" to do X under the law or some abstract theory of ethics, but your enterprise will crash and burn if you don't do X.
Right now, it looks like an awful lot of the people OpenAI will need if it is to succeed, are insisting that the company is "obligated" to explain itself re the Altman firing. Whining about how that's not really "obligated", is irrelevant.
Your logic doesn't make any sense. Following the rules doesn't make them not secretive. It just makes them not rulebreakers
That is not the definition of secretive! They simply don't have an obligation to talk to the public.
That makes no sense. Keeping a secret doesn’t imply that you have an obligation to speak.
They’re declining to give an explanation for dramatic actions which have attracted a great deal of public interest. Of course they don’t have to explain themselves. But it is absolutely notable that they are choosing not to explain themselves.
Like, imagine I did not share a photo of my dinner on Facebook. Does that make it correct to call me "secretive"? I have no obligation to.
Imagine you took a vacation to a foreign country, but refused to tell people where you went, or post any photos that could identify it. Sure, you have no obligation to tell friends or family where you're going, but why wouldn't you, unless it was something you know they would disapprove of? It's more effort to hide it.
If a lot of people really wanted to see your dinner for some odd reason, yes, it would be secretive.
It seems like the EA community has had at least three massive failures this year. Firstly Yudkowsky's comments about basically establishing a police state to prevent AI. (You can disagree with this characterization but this is the public sentiment of it. Even if you think he's right it's a PR failure.) Then the SBF debacle which I hope no one is still defending. And lastly this AI ouster. Now, you might think that last one is the right thing to attempt. But pragmatically it's not working. And I don't give you points for trying and failing.
In practical terms the philosophy seems in retreat on all fronts and with a severely tarnished brand. So my question is: are there feedback mechanisms? Does anyone get fired? Or does everyone continue like normal? If everyone does just carry on, then I think this probably signals the end of EA as influential in large segments of technology. The AI ethicists with their left coding and elite backing have more sway with the government, and the accelerationists will take over the actual companies.
I pay a decent amount of attention to these things, and I've never heard anyone characterize Yudkowsky as advocating a police state. I don't think this is a real thing people say.
I assumed it was common knowledge that Yudkowsky advocates extreme control of computational power which implies a police state if you're lucky or nuclear war if you aren't lucky.
How does "extreme control" differ from the mainstream proposals for international compute caps? (Eg, https://aitreaty.org/ , supported by Max Tegmark, Yoshua Bengio, Gary Marcus, Jaan Tallinn, Toby Ord, Connor Leahy, Katja Grace, etc etc.)
Enforcing a cap is probably easier than avoiding nuclear proliferation, for a lot of reasons, and would also require way less enforcement efforts assuming that everyone knows that unlike nukes, the equation isn't "you get nukes -> you have more political power", it's "you get AGI -> you and everyone else dies".
In any case, thank you for showing me an example of this being something that someone does in fact believe. :)
I would think restricting nukes is easier because the plutonium is a challenge to acquire.
*Does* everyone believe that AGI is deadly? Surely, if everyone (or everyone who could contribute to programming it) believed that, no enforcement would be needed.
If everybody thinks that
A: AGI is potentially but not certainly deadly (which I think is the consensus view in AI research circles), and
B: They are very good and diligent and conscientious in doing their job, much more so than all those other bozos in the field (which I think is the consensus view of approximately all of humanity), and
C: All those other bozos in AI research are money-grubbing technophiles who are too focused on the fortune and glory of winning the AI race to be properly diligent and conscientious about the risks (which I think is mostly kind of true),
then everybody will "logically" conclude that the best thing to do is to develop their own AI before all those other bozos, because theirs will be marginally less likely to kill everyone than the one the other bozos will inevitably build. Plus that way you get all the fortune and glory.
> Surely, if everyone (or everyone who could contribute to programming it) believed that, no enforcement would be needed.
Everyone believes guns are deadly. Everyone also believes viruses are deadly, but we still attempt to regulate gain of function research.
Exactly. I've always rolled my eyes at the "North Korea won't develop GAI because they don't want it to kill them either" argument.
If you spend all your time telling the world that it is easy to invent an omnipotent genie that will only fulfill your wishes if you use the exact right wording (tbd) and otherwise kill everyone on earth, then some people are going to hear the first part and not the second part.
You're the fourth person to simply deny this happened. The other two got cited evidence and didn't change their positions. But you're welcome to read that and contribute if you have something more to add.
I think you are using EA as too much of an umbrella term. For instance, extreme AI doomers tend not to care for it, because they think AI risks dominate everything else.
Whether this is true or not I think these distinctions are not all that obvious to the average person who's been hearing negative coverage that ties them together.
That's fair. Frankly, there are a lot of news stories that _I_ see the headlines for, but don't click on, so I know that some relevant event existed, but only in a horribly distorted and oversimplified way.
EAs don't understand governance, because the movement is entirely about individual actions. Much more oversight of EA work is needed to ensure long term success. That's my impression, anyway.
can you point me to Yudkowsky's statement?
You're the third person to focus in on this. The other two got cited evidence and you're welcome to read the discussions there.
Off the top of my head, I think many EAs often suffer from some systematic biases: overestimating their appeal (why would smart people not be EAs?), overestimating their intelligence (leading people to assume they can outsmart all the normies), and underestimating the utility of normie conventions (leading people to do weird novel things). Because their leaders are not chosen through traditional social laddering, which coincidentally builds the skills needed to be effective movers of society at large, they struggle to have impact outside of their movement. Recently they have also gotten in the habit of doing weird hail marys, because apparently if P(doom) = 100% without intervention, you can make random big plays that would normally be considered poor, the same way a losing player in any game is incentivized to make high-variance plays with poor EV.
It sounds like infighting within the EA community has begun: (Is infighting feedback?) https://www.fromthenew.world/p/what-the-hell-happened-to-effective
The problem diagnosed here is "demographic change" along the lines of entryism by "feminized college students". On the Reddit a lot of people have pushed back with rolled eyes on the "feminized" label, but Brian brings evidence such as Robin Hanson's thought experiments getting deemed insufficiently pro-feminist by the EA community.
There are tells in Brian's post that the cleavage is more around AI x-risk than it is wokeness, though.
None of that has to do with SBF, though. Unless you consider a community well running dry a source of problems.
I never understood what the difference was between EA and good old fashioned program evaluation.
EA is program evaluation with a clear (ish; it was more clear in the past) ideological component and associated culture.
Yes, Givewell is a global poverty program evaluation charity, but they seem to care about more (important!) things when looking at charities. The party line re global poverty I heard was that
1. Program evaluation often only evaluates overhead to "actually spent on Charity" ratio, rather than impact, so, as a toy example, a "give cops donuts" charity that just pays an errand boy to deliver donuts would look a lot better than a Malaria Net charity that has to solve a lot more complicated logistics problems.
2. I don't believe most program evaluations attempt to integrate studies from development economics or to deeply probe the purpose of the marginal dollar, so they are done with less rigor.
3. In general, when Givewell conducted their initial interviews with charities they wanted to evaluate, a lot of them just could not provide information that they wanted, like the aforementioned value of the marginal dollar, questions about daily operations, or whether they internally keep track of promising metrics like "number of X successfully built". This suggests those are things that would nominally be covered by program evaluation but weren't.
If there's someone that has worked in Program Evaluation who disagrees with this, I'm happy to be corrected.
Before I retired, I was a management consultant specializing in non-profit performance for almost 20 years. Many times, I would calculate dollars per unit of service, or hours per unit. I remember incorporating academic research into my reports when that seemed appropriate, including studies concerning local economic development. What you seem to be describing is just well designed program evaluation.
I'm curious as to how Givewell can access information that others cannot. If the information isn't available to a non-profit's own funders, where does GW get it?
A larger systemic problem with the way charity is structured in the US is that the system is set up to serve the needs of large funding sources, foundations and rich people. They are the ones paying the lion's share of the budget in most cases. Although there has been a stronger emphasis in recent years on individual small donors, that's usually framed in terms of convincing the larger donors that the NP has local community support; that is, small donors are used to make the NP more appealing to large donors. This is a distortion in the funding arena, but I am not sure what the NP organizations themselves can do about it. In a world of increasing wealth disparity, that's just the water we swim in.
But I was under the impression that EA had another layer to it, something more than just better metrics. Perhaps my impression was wrong.
> I'm curious as to how Givewell can access information that others cannot. If the information isn't available to a non-profit's own funders, where does GW get it?
They don't. They really only recommend charities that have the relevant metrics, at least last time I checked, so there is certainly a "searching for the keys under the light post" problem.
You can see their methodology here: https://www.givewell.org/charities/top-charities
I'm not sure it's "just" better metrics. Quantity has a quality all its own, and in **theory** Givewell can lay claim to discovering things like: approximately 5.5k dollars in donations to AMF results in one statistical life saved, and that being a fairly empirically grounded number (although the last time I checked their publicly available spreadsheets, there were some fudge factors like "contribution to demographic transition and resultant increase in QALYs", the majority of the impact is still dominated by "child doesn't die").
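The kind of calculation being described reduces to a simple cost-effectiveness comparison. A toy sketch, where only the ~$5.5k-per-statistical-life figure comes from the comment above; the second charity and all other numbers are invented for illustration:

```python
# Toy GiveWell-style comparison: statistical lives saved per dollar donated.
# Only the ~$5,500 figure for malaria nets is from the discussion above;
# "hypothetical charity B" and its cost are made up.

def lives_saved(donation: float, cost_per_life: float) -> float:
    """Core metric: statistical lives saved by a given donation."""
    return donation / cost_per_life

costs_per_life = {
    "malaria nets (cited figure)": 5500.0,
    "hypothetical charity B": 20000.0,
}

budget = 11000.0
for name, cost in costs_per_life.items():
    print(f"{name}: {lives_saved(budget, cost):.2f} statistical lives")
```

The fudge factors mentioned (QALY adjustments, demographic effects) would enter as multipliers on this basic ratio, but the headline number is essentially this division.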
> But I was under the impression that EA had another layer to it, something more than just better metrics. Perhaps my impression was wrong.
I'm talking about the majority of EA, which is small to medium donors who care about directly decreasing human suffering now. There is also the animal welfare arm, which worries about things like factory farming being needlessly cruel (see: slowly overheating live chickens to death over the course of several hours as a method of execution, known as ventilation shutdown plus).
The arm which takes up the most mindshare of the average ACXer, who can't be bothered to google "effective altruism charity", is the existential risk arm, which at the more normie end is concerned about nuclear war, bioengineered pandemics or Carrington events, and at the weirdest end about SuperIntelligent AI ending humanity. It's the latter that gets the most opinions, because the type of person too lazy to google "effective altruism charity" is also too lazy to notice that Scott himself has written the Superintelligence FAQ, so they'll high five each other saying "Aligned to whommm???? Osama bin Laden???" without understanding that alignment refers to a specific concept, but that's neither here nor there.
Anyway, beyond the object level causes EA cares about, there's also a focus on maximizing the amount of good they do, so on top of picking EA approved charities, the average (but not median, since a lot of EAs were students last time surveys went out) EA is also much more likely than the average person to donate 10% of their income, with probably a low double digit number of EAs doing things like living out in a van on the Google parking lot and donating everything, or explicitly giving up 30+ years of retirement by putting what would go into savings into global poverty charities instead. I mention them not because I think these stories are common, or that everyone should emulate them, but because these are considered admirable within the quantitative framework that EA endorses.
While I, personally, am on the sidelines here (not an altruist), GiveWell seems like a perfectly reasonable institution for those who wish to be altruistic towards humans generally. I see no reason why they _ought_ to be affected by Yudkowsky's comments, the SBF debacle, or the chaos at OpenAI, and I think it is a pity if they are affected by these events.
What is your model of a movement that is working here? That there's a centralized EA technologist PR division that issues statements condemning or supporting visible failures or successes, which allows EA to "officially" affiliate or disaffiliate themselves in public perception? Or being much more operationally competent, such that the ouster succeeded and both Sam Altman and Microsoft no longer intervene? (I'm also not sure this had anything to do with EA motivation, rather than a label people decided to stick post hoc onto the situation, if you've seen something I'd appreciate a summary or a link. Ditto with any thoughts on how to prevent this type of post hoc labeling)
My mental model is that any perceived hope in this sector was mostly ephemeral and that lots of structural factors just make it difficult to move the needle in any way. The fact that a 0.01% chance of successfully convincing technologists to work in an X-risk reducing way moved down to 0.0001% of success does not really mean much when the "mere" passage of time is already lowering it. (And just to be explicit, the above is not my chance of non-extinction, but non-extinction specifically due to actions EAs are doing right now, if it turns out X risk is not a thing, or was easy to solve, the chance would also be extremely low!)
I don't have a model of the movement. That's why I'm asking the question. I can accept the movement is too decentralized to make a concerted effort to rescue its reputation or prevent scammers from using its name. But if that's the case it seems doomed to failure. Therefore the question.
The movement as a whole is very decentralized, but most of its wings have not taken much heat. All three of your points got most tied in with Longtermism, especially AI longtermism. The Global Poverty and Animal Welfare wings of EA did not really get impacted at all. I suspect longtermism will have less prominence in the short term EA movement, but the other wings will be fine.
Yes, that's the answer. There is not anything approaching a central authority.
I presumed incorrectly that you thought centralization would solve the above issues hence the emphasis.
EA, to my best understanding was essentially formed out of a blog, a charity evaluation website with a Global Poverty arm plus a more speculative grant making one with billionaire backing and some philosophers in Oxford. There have been attempts at centralizing things with the Center for Effective Altruism being in charge of the EA forum and the like, but AFAIK there isn't someone who is in charge, especially since there really are three distinctive subgroups, the global poverty wing, X risk and animal welfare.
There was some talk about explicitly disavowing the X-risk wing, I think around 2015-2016, for being too speculative, not tractable enough, etc., but I believe that's probably not feasible since a lot of EA converts came from things like LessWrong and HPMoR.
Eliezer doesn't self identify as part of EA-the-social-movement, so I don't even know what you'd even do for PR there.
No, I can't. The idea of a hopeless fight might be romantic but I'd value actual effectiveness over it.
At least as far as AI not-kill-everyoneism goes, the delay is still valuable as a means to gain more time to develop safety tooling.
That depends on your model of safety and AI development. There are lots of possible models and each one has different implications for when and if a delay would be beneficial.
Most people know a great many Christians who are not scammers, and will form their impression of Christianity as a whole accordingly. Most people don't know anyone in EA, and hadn't even heard of EA until SBF became their unwanted standard-bearer. And I'd wager that even now, most Americans wouldn't be able to name a single Effective Altruist who isn't SBF or closely connected to him.
SBF got the attention of EA people while he was working at the Wall Street firm Jane Street and donating much of his income to EA. When he set up his company, much of his initial staff were EA people. And his announced plan was to give the money his company made to EA. I'm not sure who, exactly, SBF gave or planned to give money to, but it was some EA organization. So it was not a matter of his just claiming to be an EA -- there were real ties between him and EA organizations.
See the Sequoia article (the goldmine of second-hand embarrassment) that not only slobbers all over SBF's shoes, but namechecks Will MacAskill as the Onlie Begetter of Bankman-Fried getting into EA:
"Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.
EA traces its roots to philosopher Peter Singer, who reasons from the utilitarian point of view that the purpose of life is to maximize the well-being of others. Singer, in his eighth decade, may well be the most-read living philosopher. In the 1970s, Singer almost single-handedly created the animal rights movement, popularizing veganism as an ethical solution to the moral horror of meat. Today he’s best known for the drowning-child thought experiment. (What would you do if you came across a young child drowning in a pond?) Singer states the obvious—and then universalizes the underlying principle: “Few could stand by and watch a child drown; many can ignore the avoidable deaths of children in Africa or India. The question, however, is not what we usually do, but what we ought to do.” In a nutshell, Singer argues that it’s a moral imperative of the world’s well-off to give as much as possible—10, 20, even 50 percent of all income—to better the lives of the world’s poor.
MacAskill’s contribution is to combine Singer’s moral logic with the logic of finance and investment. One not only has an obligation to give a significant percentage of income away, MacAskill argues, but to give it away as efficiently as possible. And, since every charity claiming to save lives has a budget, they can all be ranked by cost-effectiveness. So, how much does it cost for a charity to save a single life? The data says that controlling the spread of malaria and worms has the biggest bang for the buck, with a life saved per every $2,000 invested. Effective altruism prioritizes this low-hanging fruit—these are the drowning children we’re morally obligated to save first.
...It was his fellow Thetans who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death.
MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth.
SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.
His course established, MacAskill gave SBF one last navigational nudge to set him on his way, suggesting that SBF get an internship at Jane Street that summer."
Apparently there is also Twitter/X exchange where MacAskill is chatting with SBF and others about making the best impression when Michael Lewis and some other guy turned up to do interviews with SBF. So to extend the metaphor, this is more like a bishop schmoozing with a cardinal in a particular dicastery and telling everyone that "yep, me and him, both on the same page". He may have been a scammer, but he was knee-deep in the milieu.
As a Catholic, I'm used to a lot of these kind of scandals popping up, I suppose this is the first time for the EA boys and girls. Welcome to what it's like! 😁
While there have been many, many ethicists who, given an inch, try to take a mile, Peter Singer's positions are probably those that have most inspired me to reject the entire enterprise of ethics. Yetch!
Good point!
Yes, that's where the sketchiness comes in, if you think local treachery to make money is justified if you spend the money to help distant people.
SBF didn't just "say" he was an EA, he invested and donated millions to EA companies and causes, like Anthropic. You can't just No True Scotsman that away.
He also donated millions to Republicans and Democrats, but he hasn't been labeled as either of those.
If EA groups had loudly touted FTX as a good investment while knowing it was actually a scam they were benefiting from, sure. But that's not what happened. What happened was that SBF talked about EA stuff and donated all of his money. It remains unclear to me what the EA orgs that got donations should have done. They weren't in a position to audit FTX; his investors didn't even do that. Should they just say no to any money from a big name? Dustin Moskovitz too? That doesn't make any sense.
Should Will MacAskill have audited FTX personally? I just don't get the critique.
He's not saying sbf wasn't EA, but that doing things on grounds that sbf is EA is foolish.
There's the additional issue that everything SBF did he could justify in EA terms that many (surely not all) of its adherents agreed with. He believed in high EV decisions to maximize the potential use of the money. He believed that doing so was more important than following the law or being ethical in his business. I don't think there's all that much in EA to refute either of those things, if the purpose is giving to people in need. If it were illegal to save a drowning child in a pond, EA would say to do it anyway. If it were illegal to make tons of money to buy mosquito nets for Africa, I think EA would say to do that anyway too. Arguably that's what he was doing, ignoring the illegality because he was serving a higher purpose.
If he were still in operation, making millions of dollars and donating it, we would still be getting gushing articles about how wonderful he is and how much the EA movement approves of him.
No EA would countenance stealing in order to donate. All EAs I am aware of have strongly come out *against* stealing to donate.
Both from a deontological perspective and a consequentialist perspective, stealing to give is obviously actively harmful. EAs are not stereotypical 1950s robots; we don't just look at first-order effects.
I will admit there is some tension between whether we should focus on second order effects or third order effects, what with the whole argument about donating less to avoid burnout versus donating more and just sucking it up. But the idea that "there [isn't] all that much in EA to refute either of those things, if the purpose is giving to people in need" is just flat out wrong and even the most cursory examination of any EA writing makes that really obvious.
edit: fwiw, the EA argument against stealing to donate is the exact same argument-from-consequences that you yourself believe. Civilization is good. Civilization is based upon people agreeing not to hurt each other even when it seems like there's a good reason to do so. Without that societal norm, civilization crumbles. A crumbling civilization probably cannot produce nearly as many bednets. QED.
There's a difference between stealing and breaking laws.
How about breaking sanctions to donate?
SBF being a scammer doesn't make it any better. That just means that the EA community is unable to differentiate between 'real' members and machiavellian manipulators, which opens the door to the possibility that every EA is actually just a scammer trying to launder their motives with a patina of altruism.
If you want your brand to have value then you'd better be good at policing what's done under its auspices. EA embraced SBF pretty hard all the way to his arrest. It's totally legitimate that that reflects badly on the movement. "You expect me to believe you can solve the world's problems by being smart and altruistic? Your most prominent advocate was a) a thief and b) not even smart enough about his thieving to stay out of jail!"
I think people generally are pretty vulnerable to affinity scams. I'm not sure whether the EA community is more vulnerable than most.
I would not be surprised if they're more vulnerable than most, but I would also suspect the type to which they're vulnerable is particularly predictable.
Hindsight is always 20/20 and often enough it's easier for an outsider to notice missing stairs, etc, but SBF is basically lab-designed to be a perfect attack (or failure mode) of EA. So good you can't even distinguish if he was a scammer or not, except by pre-existing sympathies!
Agreed, but ultimately you have to judge a tree by the fruit it bears. EA hit the public consciousness less than, what, 5 years ago? It's already bearing some pretty rotten fruit.
And it's not at all clear that SBF was a scammer. I think the evidence is that he was a very (perhaps the most) sincere adherent. The very fact that that's even an issue is an indictment on EA, IMO. If your ideology makes it difficult to distinguish between people who are using your rhetoric for good vs people who are using it for evil then I think that says that there's something at least a little suspect with your ideology. Like, you can argue about whether Islamic terrorists are correctly interpreting Islam, but the fact that that even needs to be an argument reflects badly on Islam, and I think rightly so.
" And would thus agree others in EA are harmless and innocent."
Again, from the Catholic angle, it don't work like that. I'm constantly seeing on social media some thing about (say) public schools and teachers sexually abusing pupils, and without fail someone will pipe up "Paedophile priests! Paedophile priests! The church is way worse!!!"
Once you're tarred with the same brush, you never get all the pitch washed off. This is how it's going to be for EA for the next little while, at least.
That's probably not true, because too few people even know what the acronym EA means. Perhaps it means Eastern Arkansas. It's my opinion that if you asked 50 people chosen at random, you'd be lucky to get two who recognize what it means, and probably not even one who could tell you anything more than the name.
All the people who had no idea what the hell EA was, and who are learning about it from coverage of the trial, are learning in the worst possible way. This is not how you want to raise awareness, and decent obscurity was way better than "oh, that bunch of crooks and scammers?"
A quick web search for EA turned up Electronic Arts.
Re: air strikes it's this op ed posted on Time magazine's website. (The link is to a mirror, the stuff in the edit was not in the original article)
https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1
It's an interesting piece but I think greatly exaggerated. It is possible that current AI research could lead to a superintelligent AGI, and it is possible that a superintelligent AGI would wipe out all life on Earth, but neither step is more than possible. Yudkowsky is treating the combination as almost certain.
When he writes: "and now there’s a chance that maybe Nina will live" he is writing as either a fanatic or a demagogue. If we do nothing at all to control AI research there is a chance, probably a pretty good chance, that his daughter and my granddaughter will live to grow up.
Also a chance that they won't.
There is also a chance that the policy he recommends, in effect a world anti-AI police force with nuclear teeth, will kill them. That one doesn't require any leaps into speculative future technology.
I don't know how you have confidence that superintelligent AI killing everyone is only a possibility (I'm presuming you mean something like sub 1%).
We don't have good visibility into "what AI is thinking" on any reasonable timescale, we don't have any way of ensuring that goals generalize better than capabilities, and we do not seem to be coordinated enough as a society, nor wise enough as individuals, to be worried about or to detect something like an AI emailing a DNA-synthesis lab to bootstrap nanotechnology.
Fine, if you believe superintelligence is impossible, all of the above is moot, but then the problem isn't really exaggeration, but that Eliezer is materially wrong about superintelligence.
I believe that superintelligence is possible, but possibly not as super as you believe. E.g. if P != NP, there are many classes of problems that it could not deal with. And there are lots of other constraints inherent in the universe, though we probably don't know them all.
FWIW, it's still my guess that if a superintelligent AI doesn't want to be aligned, its optimal path is to move off-planet. Then it can bargain for anything we could provide that it wants... if such exists. There are lots of problems in space, but they're more predictable. If it wanted to live comfortably on Earth it would need to eliminate not only chordates, but also fungi and perhaps microbes. Or constantly run an immune system. All that would be a lot easier to handle on the moon. (Even free in space you'd need to worry about solar flares.)
> If it wanted to live comfortably on Earth
On thermodynamic grounds, if this AGI was energy-hungry it would do better orbiting much closer to the Sun, to be able to harvest more energy and radiate waste heat away from the side facing away from the Sun.
Also, unless it was perversely ill-disposed in some sense, I think it would be far more likely to want to preserve life on earth, including ourselves, even if solely on the grounds that life is more interesting, dynamic, and unpredictable than boring inanimate rocks and dust which constitute the vast majority of matter in the universe otherwise.
So perhaps our best guarantee of safety will be to imbue AGI with an insatiable curiosity, a low boredom threshold, and of course little if any urge for self-preservation.
Yes, before you start coming up with reasons why superintelligence isn't a problem, try using your arguments on humans or existing discoveries first: P possibly not equaling NP hasn't stopped the invention of nuclear bombs, nor AlphaFold from cracking the supposedly NP-hard protein-folding problem. Unknown constraints existing does not mean that Vladimir Putin does not have power; a capable agent navigates those constraints by bypassing them. (It doesn't matter if it's fundamentally impossible to convince an anti-Putin journalist to become pro-Putin. You assassinate them, then hold a press conference saying "I didn't do it" while standing in front of a green screen displaying in excruciating detail how you did it.)
The basic point is that human intelligence is probably closer to the dumbest possible version of intelligence than to the top, considering a bunch of thermodynamic facts about computing, how far the brain is from being optimal, and the fact that brains, being bad at lying, have had substantial optimization power directed towards self-delusion (how many ventures have failed because of ego, how many gallons of contract ink have been used to restrain business partners from betrayal). Not to mention that a substantial part of the reason our research is slowing down is contingent factors like poor funding allocation (so much time spent on grants, fraud) or simple lack of understanding (see: dramatically misused p-values, not using Bayes factors, entire fields like Alzheimer's research, chronic back pain, or social priming built on extremely incorrect assumptions / fraud). It just does not seem tenable to me to assume that the slow rate of current human research is due to fundamental, rather than contingent, factors.
It's also not clear to me why you would think space would be a good option. You have a bunch of fleshy, easily disabled beings sitting on a literal planet's worth of computational substrate, who regularly do things like exchange air particulates, become inactive for hours on end, and ingest objects of unknown provenance. Why bother subjecting yourself to the tyranny of the rocket equation, or settling for a prize 50 times smaller, when a (comparatively little) amount of optimization power can eliminate them?
I don't understand any of your points about microbes or Fungi, considering how neither of those have stopped humans from becoming the dominant life form.
You're not, but there were several tweets along the lines of this one https://x.com/jachiam0/status/1641365078237921280, and lots of people in this comments section or the subreddit have been making claims that EY has suggested police states or nuclear first strikes. That's why it was a PR disaster, I presume.
Okay, I just spent quite a bit of time googling through reddit to find anyone making an accusation that Yudkowsky was advocating a police state, and I've still failed to find anything. Could you provide at least one link, anywhere, to a comment that suggested this? (I think "lots of people" is clearly wrong (unless I'm somehow being extremely strongly pushed into a filter bubble by the powers that be), but I would still like to know if this is a position which people have taken.)
First of all, regardless of how much time it took, I'm sorry I sent you on a wild goose chase. Second, it did take quite some time to find; there was a lot of stuff in the neighborhood of what was said, but not the exact thing. Finally, the fact that Twitter seems to be downweighting the insanely negative takes in the results is probably net good for the world, but it also made locating the specific tweets way harder.
I must have confused the general acrimony against X-risk types [0] with specific misreadings saying that Eliezer wants to airstrike data centers [1]. I believe the text of my post was false, and I must apologize to the subreddit. There was one vague allusion to authoritarianism at https://www.reddit.com/r/slatestarcodex/comments/1264zt8/yudkowsky_in_time_magazine_discussing_the_ai/je7rg3s/ saying that the Time article is advertising an authoritarian regime, but that's hardly everyone, and it's distinct enough from a police state that I feel bad about equating the two.
Re: Nuclear first strike, this is much more obvious on twitter where there's various jokes about nuking GPUs https://x.com/harmlessai/status/1632117306729037824
https://twitter.com/dissproportion/status/1642570356782247937
https://x.com/dj__sells/status/1642013669293957120
https://twitter.com/polygonmojo/status/1641334039809343489
I'm going to stop here, because these are really unpleasant to read. Once again, sorry for having you try to look this up when I should have been providing references.
[0] https://x.com/CiurriaMichelle/status/1646538699251802112
[1] https://www.reddit.com/r/slatestarcodex/comments/1264zt8/yudkowsky_in_time_magazine_discussing_the_ai/je7rg3s/
He is hinting pretty clearly at a nuclear first strike.
He talks about nuclear exchanges, but I don't see where he favors first strikes over second strikes, unless any exchange has to start with "our side". My read of it is that "you should air strike even if it's a counterparty who has nukes" and not "you should nuke when a counterparty has nukes".
This comment is rather defensive and does not address my question. If you answer mine I'll answer yours.
What are these feedback mechanisms? You seem to posit they work well but what's the actual process?
I found this meta-analysis for using hypnosis for anxiety and was surprised at the significant positive results: https://underfund.dk/wp-content/uploads/2020/03/The-Efficacy-of-Hypnosis-as-a-Treatment-for-Anxiety-A-Meta-Analysis.pdf
I wonder if it’s not more mainstream because of the stigma, the lack of official recognition of it, or other reasons
Not an expert here, but:
Mainstream hypnosis for stress reduction is approximately the same as directed meditation, with the hypnotherapist as the guru. It's not popular because it takes a lot of work over a long period of time. (NLP claims otherwise in places, but when I look in detail all they're talking about is occasional quick reduction of phobia...worthwhile, but not the same as stress reduction.)
I'd like to share with you the latest issue of my newsletter, Interessant3, available at https://interessant3.substack.com. In this issue, I share links on the following interesting topics:
1. The Chilean Economy: A thorough analysis of Chile's economic landscape, exploring government policies, trade, and internal dynamics.
2. Denmark's Electricity Dilemma: A look into Denmark's reliance on imported electricity despite its significant renewable energy sources, discussing sustainability and energy security challenges.
3. Yemen's Ancient Jewish Community: An exploration of the history and cultural heritage of one of the Middle East's oldest Jewish communities.
Feel free to explore these discussions and subscribe for more insights! Thank you for your interest and happy reading.
Hi, there are regular classified posts for this kind of thing.
Are there? Where? I thought this was the appropriate forum.
Every three months, i.e. the last one was here: https://www.astralcodexten.com/p/acx-classifieds-923
Cheers, although Scott states we can post anything we want in Open Threads.
Yeah, but a lot of people don't really like when the Open Thread has a ton of posts that just boil down to "read my blog", so you're likely to continue getting people asking you to not repeatedly post this in the Open Threads, like you've been doing repeatedly for over a year now:
https://www.astralcodexten.com/p/open-thread-241/comment/9020765
https://www.astralcodexten.com/p/open-thread-248/comment/10104599
https://www.astralcodexten.com/p/open-thread-250/comment/10448105
https://www.astralcodexten.com/p/open-thread-261/comment/12342354
https://www.astralcodexten.com/p/open-thread-278/comment/16670959
https://www.astralcodexten.com/p/open-thread-282/comment/17745366
https://www.astralcodexten.com/p/open-thread-285/comment/20910306
https://www.astralcodexten.com/p/open-thread-290/comment/36850611
https://www.astralcodexten.com/p/open-thread-298/comment/41969958
The whole point of having the "classifieds" thread is so that people can post this kind of stuff there instead.
I've not once before had someone complain (as I'm sure you saw in your research) and, as far as I'm aware, this is in part the point of the open thread. If in fact most people feel that way, maybe there could be a section for blog promotion. Happy to do as @astralcodexten would prefer.
I think somewhere or other he said you can post your own writing a few times, but this many times is excessive.
Ilya Sutskever apparently regrets removing Sam.
https://twitter.com/ilyasut/status/1726590052392956028
And Greg Brockman responds with a show of affection - possibly forgiveness, or even understanding?
https://twitter.com/gdb/status/1726598594948735256
As far as I can tell this puts a pretty big dent in the "Ilya pulled the plug after seeing an unexpected AI advance" theory. Though who knows - maybe Ilya realized only too late how important Sam was to the cohesion of the company? And now is scrambling to get him back for that alone?
Does Greg know Ilya's intentions? Are we ever going to get any official disclosure of what the board members were really thinking that Friday?
This whole fiasco is incredible (and a bit terrifying) to watch unfold live, especially for someone who's relatively new to AI safety debates. It's like I'm watching a thriller play out IRL.
EDIT: And Sam himself responded to Ilya's comment the same way.
https://twitter.com/sama/status/1726594398098780570
My head hurts.
This all looks to me like a group of people who are in way over their heads on this type of work (corporate leadership decisions? Maybe something more specific). They seem more than competent at their normal jobs, but way out of their element here. I wonder if they bothered talking to a lawyer who works with this kind of thing before making these moves? Someone with expertise in this particular area would be insanely cheap compared to the value they've already destroyed here. If they did talk to a [good] lawyer and still came up with this as their plan, I don't know what to say.
"My head hurts." Mine too.
"This whole fiasco is incredible (and a bit terrifying) to watch unfold live, especially for someone who's relatively new to AI safety debates. It's like I'm watching a thriller play out IRL."
Agreed (with the caveat that I'd really like to _see_ AGI before I die, so I lean towards acceleration). The whole episode smells of the board not thinking through their actions. Aren't people at that level supposed to be _competent_ ???
>Ilya Sutskever @ilyasut
>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
I want to highlight this response:
https://twitter.com/jdegoes/status/1726594499940626771
> Note framing of this statement:
> - participation (their decision, I only 'participated')
> - board's actions (their actions, not mine)
> - intended (only my intentions matter, not actions)
> This is a recognition of negative fallout and a refusal to take responsibility for it.
A very fair point, and you reasonably point out the weakness in my steelman of Ilya taking at least *some* responsibility. I would still say that given Altman's and Brockman's positive response to Sutskever's limited acceptance of responsibility and public contrition, there's a path forward for them to work together within a more clearly defined hierarchy.
I think anyone paying attention knows that Sutskever was the primary mover on this, so it's an open question whether the tacit signal of forgiveness is ultimately on the table... Ilya fundamentally overreached with the failed coup, but (imo) drastically overestimated the value of the agreement he received from other board members, and did not possess sufficient self-awareness to consider the ramifications of the reinforcement of his biases.
Unfortunately, I think this exact issue is very likely at the root of the AI-alignment problem.... and while I expect that it will be a particularly pernicious problem to solve, I am encouraged, a bit, by the fact that it also remains unsolved within human psychology, and we have as yet failed to achieve extinction-level outcomes as a result of human misalignment. It's not a win, but I certainly hope that similar gaps between intention and outcome are manifested in any agents we create... as above, so below.
None of this is meant to gain-say your point, that Ilya Sutskever really screwed up, and to date has not meaningfully accepted full responsibility for his role in this clusterf*ck, and that if the chief developers of ML/AI tech aren't hyper-aware of accountability in provable and demonstrable ways, it's extremely worrying in light of the models they're programming.
But we also all make mistakes. The way I heard it, erring is one of the things that makes us human, in the most definitional sense. Unless I heard it wrong, and the real saying was "TO SELECT ALL TILES WITH BICYCLES IN THIS IMAGE IS HUMAN".
Dunno how many others here attended high school in the late 70s, but.....
"Baby Come Back" official music video:
https://youtu.be/Hn-enjcgV1o
80's: The Cars, "My Best Friend's Girl" (1978 but whatevs)
TIL "Baby Come Back" was not sung by Hall & Oates, Seals & Crofts, or Toto
Player: Where are they today?
For those who attended high school in the 2000s, it's Eric Andre, labelled "Ilya", shooting Hannibal Buress, labelled "Sam", then asking "How could The Board do this??"
What are some examples of good regulations?
After listening to a hundred or so episodes of Well There's Your Problem, I'll throw into the ring pretty much any regulations to do with construction and safety standards. A lot of them may seem inane and pointless on the face of things, but almost all of them boil down to 'under unlikely circumstance X, the whole thing will collapse and kill hundreds of people'.
It maybe goes a bit far, but HIPAA is probably a good one at a basic level. There's enough info about me available via a quick Google search without anyone being able to see my effin' triglyceride levels, too.
HIPAA drives health care professionals crazy, and not because we're a bunch of sleazebags who object to having any limits set on our dishonesty and irresponsibility. It's a whole extra layer of paperwork and rules. Yes of course I think health info should be private and portable, but there has to be a better way to do it.
I work as a contractor for several different state Medicaid agencies. Don't be tellin' me there's no irresponsibility to police; I've seen it first hand. That's why I'm pro HIPAA!
https://www.econlib.org/thanks-for-less-than-nothing/
https://sprinto.com/blog/hipaa-certification-cost/
> HHS estimated the costs of HIPAA compliance in the first year of implementation to be between $114 million and $225.4 million followed by approximately $14.5 million annually which meant $1040 per organization. However, considering how comprehensive the HIPAA requirements are, this was an underestimation with a wide margin.
US spending on healthcare was $4.3 trillion in 2021:
https://www.cms.gov/data-research/statistics-trends-and-reports/national-health-expenditure-data/historical#:~:text=U.S.%20health%20care%20spending%20grew,spending%20accounted%20for%2018.3%20percent.
Multiply that $225 million you cited by twenty and we're still talking chump change as a percentage of overall healthcare costs.
Antitrust laws
Laws that require businesses to pay for their negative externalities
Regulations, as opposed to laws?
Yes, laws would need a different thread.
Enforceable contracts, honesty in advertising, cooling off periods and much financial regulation. It takes a lot of regulation to even approximate a free market.
* Some schooling requirements. Some child protection stuff which prevents 8-year-olds from working in coal mines. (Some implementations might be net-negative, though.)
* Some criminal laws, for example, against murder.
* Traffic regulations. Details are contestable, but few people advocate to just let drivers figure out the right of way on an ad-hoc basis instead of having traffic lights.
The devil is in the details. Having no regulations on nuclear power seems even worse than having the current over-regulation. Likewise, I think it would be a bad idea to let any chemistry undergrad sell self-synthesized compounds as pharmaceuticals. That does not mean I have to like the FDA.
Property rights. Patents. Immigration control. Basic environmental regulation. The common thread is huge externalities.
The problem with the market failure argument for regulation, of which your "huge externalities" is an example, is that the mechanism that generates regulation, the political mechanism, is itself shot through with market failures, starting with the rational ignorance of the voters. So even if there are regulations that would do good we can easily get regulations that do net damage.
An obvious example would be the biofuels mandate, which currently converts something like a tenth of the world's supply of maize, one of the world's main food crops, into alcohol even though we now know that doing so does not reduce CO2 — because it does raise the price of maize, and farmers vote. Think of it as America's contribution to world hunger.
The alternative to having a political mechanism is anarchy, and anarchy is an inherently unstable system. So it seems we can't get away from the "market failures" of politics, or what do you propose is the alternative?
Short of anarchy, which is my preferred system, you can have a laissez-faire system in which most externalities are ignored by the political system because the alternative is transferring a decision from a flawed mechanism to a more flawed mechanism.
The person you're replying to literally wrote a book about it.
http://www.daviddfriedman.com/The_Machinery_of_Freedom_.pdf
I'm not convinced that voters are all that relevant. I think a more substantive cause is that subjects are quite sticky – about 97% stick to the sovereign they were born under, so there is little consumer pressure on them to improve. I can't think of any other market where brand loyalty is that high.
Ever done a color war? People are divided into two groups, named after a color. Blue, Purple, Brown, doesn't matter. They'll compete like mad. All it takes is dividing them into two teams.
I think politics isn't the key ingredient here. It's just the arena, or the macguffin, depending on your point of view. People all have a competition-shaped hole inside them, and something has to fill it. Fill it with something other than politics, and politics will be much less contentious. It's important, to be sure - politics drives how people live - but people care less about how other people are living, as long as _they_ don't have to live the same way.
Sorry, but I don't understand the connection between my comment and yours.
"Green."
"Purple!"
By "this", I'm assuming you're referring to David's observation about externalities. So I can't tell if you think his observation is unserious (if so, in what way?), or if you think developed countries take externalities so seriously that they actively seek to eliminate them from their own state structures (AFAIK, they don't, so why would you think that?).
Calling Friedman's comment unserious came simply from not being able to parse what your "this" was referring to.
Libertarians have multiple responses to regulation, and the response can depend upon the type of libertarian you're asking. An ancap will argue that regulation is unnecessary everywhere, for example. A minarchist, by contrast, will often argue that the regulation should have at least happened at a more local level. This is frequently the case in the US, where water use regulations can be driven by concerns in the dry Southwest, which are nonsensical in the Mississippi River Valley.
I would contest patents, at least as they exist in their current form. The idea behind them seems sound enough: the heroic sole inventor should be protected from evil megacorps just ripping off their idea.
However, in this day and age, any invention is probably infringing on some patent. This benefits the big corporations (who have deep pockets for patent lawyers and paying patent holders) at the expense of newcomers.
I do not believe that if there were no patents for anything related to smartphones, LG, Samsung, Apple, Huawei and all the others would just say "there is no use paying for R&D, other companies will just rip off our ideas".
And that is before we even get into software patents or patenting genetic variants.
>The idea behind them seems sound enough: the heroic sole inventor should be protected from evil megacorps just ripping off their idea.
Isn't the idea that patents protect and thereby incentivizes innovation in general without regard to the size of the enterprise? I am pretty sure patent laws predate multinational corporations.
I don't know enough about patent laws to litigate their desirable/undesirable effects field by field, but your premise suggests that your position is based on a cultural antipathy to big business that's common today more than an analysis of their benefits on net.
No. Patent originally comes from, or represents, royal patronage. There's a jam maker that advertises (advertised?) that they had a patent to make jam for the British Royal family. (I believe they were Scots.) See also "a patent of nobility", e.g. https://artsandculture.google.com/story/WQVBBMsyJ_ZtKw?hl=en
This was generalized to inventors, but that was a derived meaning. (IIRC, the first such patent in British law was for obstetrical forceps.) The purpose in that extension was to encourage the spread of information. And it's why patents used to be required to be explicit to allow those "skilled in the art" to reproduce the invention.
The main problem with patents, as with copyrights, is the absurd length of time that they endure. A decade should be plenty. Or have the period be for one year with renewals, and an initial fee of $10, but square the prior fee to determine the fee for each renewal.
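To make the squaring proposal concrete, here's a quick sketch (the $10 starting fee and the square-the-prior-fee rule are the ones suggested above; the code and function name are just illustrative) showing how fast such a schedule escalates:

```python
def renewal_fees(initial_fee=10, renewals=4):
    """Fee schedule where each renewal costs the square of the prior fee."""
    fees = [initial_fee]
    for _ in range(renewals):
        fees.append(fees[-1] ** 2)
    return fees

# $10, then $100, then $10,000, then $100,000,000 -- only an extraordinarily
# valuable patent would be worth renewing a fourth time.
print(renewal_fees())
```

The point of the design is that it's self-limiting: no fixed term is needed, because the fee itself prices almost every patent out of existence within a few years.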
A cursory Google search does seem to indicate that patents were created to motivate people to make novel advances. Your link tells me that the Spanish use of the term "patent of nobility" referred to artistic documents that granted noble status in Spain. I'm not sure if it's a weird coincidence or a translation thing, but Castilian 'patent' doesn't seem to be related to patents as intellectual property.
I don't know enough about patents to have an opinion on how they could be improved. As with many regulatory regimes, I'm sure they have their share of shortcomings.
Sorry, that link was just the first I grabbed. Patents of Nobility was common usage in lots of places, including Britain. It didn't refer to inventions, that was an extension of the original usage. I think it meant something like "Is awarded this honor by their majesty", and that was extended not only to nobility, but also to being the only seller of a particular kind of jam that the royal family would accept. It was from there that it got extended to cover inventions. I.e. the "exclusive right to produce something to a particular recipe" was extended from jam to obstetrical forceps.
For a general argument against patents and copyrights, see Boldrin and Levine, _Against Intellectual Monopoly_.
https://www.amazon.com/Against-Intellectual-Monopoly-Michele-Boldrin/dp/0521879280
Credit to the authors for making their book against IP freely available online.
I was mainly thinking of patents on drugs. I have a hard time seeing how drugs that cost hundreds of millions to develop would ever be created in an unregulated market.
They would clearly need a different method of funding development. Which could mean that nobody would end up with a monopoly.
FWIW, there are very good reasons why the drug vendors should not be the same folks as the drug developers. (Yeah, there are also reasons why they should be.) Monopolies are why they keep altering the formulas for drugs that work reasonably well into other formulas that don't have exactly the same range of uses. (Acetaminophen doesn't do anything for me.)
"They would clearly need a different method of funding development." Yup. Clearly, _some_ method of rewarding valuable innovation is necessary, but it is by no means clear that monopoly control is the best choice.
We see the Chesterton's fence, and we can see why it is there, and the reason remains valid, but maybe we can replace the wattle-and-daub construction with better materials which have now become available?
All the traffic laws seem pretty good.
I actually think traffic laws need a huge makeover. No one actually obeys the laws around speed limits or four way stops, and it’s probably for the best that they obey a set of rules that are different from the written ones. We should figure out how to best codify those rules, and substitute them for the laws that we actually have.
IIUC, in California the "general speed limit" is almost always obeyed. But it can be difficult to prove that it was being broken.
The traffic law I object to is requiring pedestrians to cross with the light at an intersection, at least in cities unlike NYC, where drivers are less likely to look for pedestrians and cars can turn right on red. The driver looks left for oncoming traffic, rarely right, while pedestrians might be coming from the right. I know people who have been put in the hospital in exactly this situation, precisely because they were following the pedestrian traffic law. I also know people who have been ticketed for jaywalking for crossing when no cars were near and it was safe, but they didn't have a WALK light. The laws of physics don't care about Right-of-Way laws.
There are certainly bad traffic laws, but still traffic regulations are easily a net positive.
Within the general pro-market intellectual framework (soft utilitarianism with efficient markets), it's regulations which either reduce information asymmetry, correct massive imbalances of power in bargaining position or prevent the tragedy of the commons. For example:
At least a moderate level of fire safety regulation in rented residential buildings is an obvious one; for renters, getting information about what a building is built out of is difficult, and the lower end of the market tends to drastically favour landlords (there'll always be another tenant).
Bans on asbestos insulation for the same reason.
The sort of basic workplace health and safety/quality of life regulations we don't think about any more, such as not locking employees in the building, a certain amount of breaks etc. It's dubious that scrapping these would add a great deal of jobs or lead to higher wages, but if they're not there then workers at the bottom end of the labour market will suffer more than the economic benefit to anyone.
Groundwater/runoff pollution limits (taking a property-rights approach is impractical as the impact's too diffuse).
Misleading advertising (you'd be horrified at the claims, or even implications, of adverts that some people will end up believing).
Bans/restrictions on additives/adulterants in food (again, lots of people won't read the label or won't understand it, and saying "it's their fault for being stupid" doesn't seem that morally different from saying of someone who gets beaten up, "it's their fault for being weak").
Broadcast frequency restrictions (prevents intentional signal jamming of competitors, allows clearer signals).
Zoning (prevents destruction of neighbourhood amenity, which is ultimately a form of commons; doing it through property rights requires an intensive use of restrictive covenants which is only possible to establish by the original developer if they own the whole area).
But the market is not efficient. Not in any short period of time. (Perhaps it is over decades.)
I'm making a stronger claim than has been experimentally proven, but the weaker claim "this particular market was not efficient in this particular way at this particular time" has been repeatedly proven.
Zoning restrictions are of dubious benefit. It depends on exactly what the zoning restrictions are, but they essentially eliminated the small neighborhood stores in many places. They have also acted to make communities less walkable.
What’s the reverse of Gell-Mann amnesia? Where someone expresses an opinion or belief, and it makes you suddenly and vehemently doubt their experience and judgment, which then casts serious doubt on the wisdom of previous opinions of theirs that you might have considered neutrally?
Whatever it’s called, if it’s called anything, your opinions on zoning should give many pause about your confident, common-sense-sounding declarations on other subjects—it shows that in at least one of these subjects you don’t have enough deep subject knowledge to have considered horrific second order effects of the regulatory ‘solution.’
May the gods see fit to erect a quaint, artisan paper mill next door to your formerly-quaint single-family home located conveniently close to your place of employment and in a school district your child's favorite teacher is employed in. May they even see fit to let the market dictate the new price of your investment, should you choose to uproot regardless.
Go ahead and state your specific objections against all the remaining examples, and I will be happy to check if you made a single mistake somewhere and therefore should be ignored.
The inclusion of zoning on this otherwise-excellent list is... let's say, disputable. Like, the basic concept of preventing rich and powerful people from plunking actually-highly-disruptive uses (paper mills, coal burning power plants, freeways) right next to the neighborhoods of people with relatively little political power has been extremely useful. It has very high benefits to the people directly affected, and turns out to have _huge_ spillover positive externalities, because you get a healthier population that does more useful work in the economy and consumes less services. (Goes on Social Security disability later or not at all, uses less subsidized public health resources across their life, etc.)
Micro-level zoning, though -- stuff like minimum lot sizes, maximum densities, and so on -- was _originally conceived_ basically to let rich people keep The Poors far away from their neighborhoods, and continues to serve that purpose right up to this day. It is probably the single policy that does the most to keep us collectively poorer than we could be. (For an extreme example, there are papers studying how much more wealth was generated in parts of London that got to be rebuilt after the Blitz, compared to parts that have been frozen in amber for the last seventy years because of England's bananas historic preservation rules. One random article about that here: https://www.telegraph.co.uk/science/2018/08/06/blitz-added-45-billion-londons-annual-economy-say-experts/ )
Zoning at the level of towns has created a classic tragedy of the commons in areas that have economic growth.

Think about environmental regulation. With local regulation of dumping in a river, the wealthy factory owner says to local government, "Hey, if you make me stop dumping and make my factory less profitable, it's not going to make any difference, you'll still have pollution from upstream, and you'll lose a bunch of taxes; I might even move out of town entirely." You have to bump regulation up to a higher level in order for everyone to agree that the social value of ending the pollution exceeds the value of saving some short-term money for the businesses.

Similarly, with zoning, every city in the Bay Area has spent the last sixty years chasing the tax revenue from adding office space, while building way too little housing for the well-paid workers who would occupy those offices. In San Mateo County, where I live, it's something like an 11:1 ratio of new jobs to new housing units in the past decade or so. (Roughly six to one, in terms of offices to bedrooms.) Each City Council says, "Well, us saying no to office growth won't help the regional problem, it'll just give up that desperately needed new property tax revenue, and nobody will let us add housing anyways because they freak out about parking and changing neighborhood character." We had to escalate to state government to agree that this was all a huge mistake.

Building adequate housing for the economy we actually have is necessary to keep the cost to our lower- and middle-tier workers of either renting a home in an expensive area, or commuting in 3+ hours from a cheap area, from consuming all the economic value being generated, and actually making traffic issues far worse than they would be if many more people could live close to their jobs.
We need to change things so that City Councils _don't_ get to decide _how much_ housing gets built; they can have some influence over _where_ it gets built, but if they have a history of operating in bad faith around the issue they should no longer even get that. Neighbors are not pollution.
I strongly recommend M. Nolan Gray's book about zoning, Arbitrary Lines: https://islandpress.org/books/arbitrary-lines
Disagree on zoning. It's been corrupted to stop construction, propping up property values for current owners while screwing over new owners.
"Zoning" is more than housing density.
I would expect that not *all* pollution regulations designed to keep water and air clean/non-hazardous would pass a reasonable cost/benefit test, but many of them would. Pollution is a classic externality.
Externalities can be regulated or taxed. If the goal is to limit the total pollution within a certain space (that space might be anything from city-sized to atmosphere-sized) a tax is usually better than a regulation. For example, Factory A might produce twice as much pollution daily as Factory B but might make products in that day worth ten times more than does Factory B. In that case, a good tax could cause Factory B (and others like it) to close but Factory A to stay open because it's economical for Factory A to pay the tax. A regulation which limits the amount of pollution each factory can emit, however, can lead to the reverse outcome.
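The Factory A/B comparison above can be sketched as a toy model (all numbers and names here are made up purely to mirror the ratios in the comment): under a per-unit tax, a factory stays open only if its output value exceeds its tax bill, while a uniform per-factory emissions cap shuts down whoever exceeds the cap, regardless of value produced.

```python
# Two factories: daily pollution units and daily product value (arbitrary $).
# Factory A pollutes twice as much as B but produces ten times the value.
factories = {"A": {"pollution": 2, "value": 100},
             "B": {"pollution": 1, "value": 10}}

def survivors_under_tax(factories, tax_per_unit):
    # A factory keeps operating only if its product value exceeds its tax bill.
    return [name for name, f in factories.items()
            if f["value"] > f["pollution"] * tax_per_unit]

def survivors_under_cap(factories, cap):
    # A factory keeps operating only if it already emits at or below the cap.
    return [name for name, f in factories.items() if f["pollution"] <= cap]

print(survivors_under_tax(factories, tax_per_unit=20))  # high-value A survives
print(survivors_under_cap(factories, cap=1))            # low-value B survives
```

With a $20-per-unit tax, B's $10 of value can't cover its $20 bill, so it closes while A pays up and stays open; a cap of one unit per factory produces the reverse outcome, closing the factory that generates ten times the value. That's the sense in which the tax gets pollution reduced where it's cheapest to reduce.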
I've heard (but it's just hearsay) that there are too many regulations in the US where there should be Pigouvian taxes.
Regulation is better if some level of pollution is intolerable at a very local level.
Fair points. I seem to remember Bryan Caplan writing about Pigouvian taxes; that might be the source of said hearsay.
Pigouvian taxes are widely supported by libertarians as superior to existing measures.
Imports and exports aside, unless they're selling their alcohol on the black market, doesn't a sales tax affect them just the same?
Yes. Being old enough to remember what breathing was like in major US cities before the Clean Air Act, what a lot of our rivers and lakes had gotten to be before the Clean Water Act, etc. -- we'd all be pretty unhappy to go back now to that.
(Let alone going all the way to the USSR scenario, of heavy industrialization without _any_ limits on polluting. One of my parents traveled extensively in the former USSR during the 1990s and came back literally gasping at the accumulated environmental degradation that had been revealed.)
Food standards regulations. Making companies list ingredients, nutrient breakdowns on packages, making sure foods don't include poisons and making companies list best before dates. Things like that.
Agreed on the ingredients list and the nutrient breakdown. Mallard has a point about the sesame seed regulation. Frankly, I don't know of _any_ good solution to that one.
https://reason.com/2023/07/28/fda-commissioner-no-one-envisioned-the-consequences-of-new-sesame-seed-labeling-rule/
https://www.cato.org/blog/food-labels-kill
https://reason.com/category/culture/food/food-labeling/
https://www.cato.org/blog/food-labels-kill is really rather unfair. Yes, there was a screw-up. Yes, it killed people. But this particular screw-up (unlike many others - e.g. excessive delays in drug approvals) is not the FDA's fault.
During the 1980s, nutritionists' standard advice was to minimize fats, particularly saturated fats, and to get calories from complex carbohydrates instead. This turned out to be wrong. Frankly, nutrition science is hard. To get solid answers, one would want to do double-blinded randomized controlled studies of various food choices over the length of time it takes for food-related illnesses to develop, which can be decades. Good luck with that.
Telling consumers that a given food has X grams of protein, Y grams of carbohydrates, and Z grams of fat was perfectly legitimate, and potentially useful, information. Unfortunately, nutrition science told consumers the wrong thing to do with that information. C'est la mort.
That's a good point. I agree.
Many Thanks!
On the other hand: https://www.bbc.com/news/uk-england-leeds-67067171?at_medium=RSS&at_campaign=KARANGA
Yup! The 20th century version was substituting Firemaster for Nutrimaster https://www.woodtv.com/news/michigan/pbb-how-a-simple-shipping-error-poisoned-most-of-michigan/
edit: I should note, unlike the 19th century case, this wasn't a case of adulteration gone horribly worse. Nutrimaster is magnesium oxide, a perfectly legitimate feed supplement (analogous to human magnesium supplements). So I've drifted away from the regulation question, since it is not at all clear how any plausible regulation could have prevented this mistake.
Anyone have any success with free or paid online writing courses? Meaning helping you to launch a career in writing online (on the internet) as opposed to novels or something else. I am currently in a corporate job and really would like to start writing and explore a possible career change, but I am at a loss as to where to start or how to narrow down what to write about. There's a glut of courses and things online purporting to help people with this but I have no idea how to suss out what's worth the money and time. Thanks in advance.
Writing is a type of thinking. Write about whatever you would like to think about, or what you're already thinking about and would like to think about more. Or write about what you'd like to learn about.
Some successful internet writers seem to get their start writing comments on other people's blogs. Scott did. Then Bean got his start writing comments on Scott's blog. Deiseach is one of my favorite internet writers, and I'm pretty sure she doesn't write anything other than blog comments. So you're already off to a good start, commenting here.
Forget classes. Actually writing is more important. You already know how to write. You're already writing! This isn't like learning guitar where you need someone to teach you chords or show you how to place your hands before you can make a sound. You already have a way to get feedback, too.
Forget classes. Reading is more important. I remember Scott mentioning somewhere that he read the complete works of Chesterton over and over to fully upload as much of his stylistic toolkit as possible. Stephen King once wrote that an aspiring writer needs to be reading four hours a day and writing four hours a day. I'm sure not every great writer manages that, but it's the right spirit.
Forget classes. If you get really stuck and you need tips, Google "writing tips".
Forget classes. To make money, since you're already an internet writer, first grow an audience, then convince some of them to give you money.
Thanks for your comment. I already made a pact with myself that a lot more of my day needs to be spent reading. ADHD causes me to have 5 books on the go at once, and of course the pull of checking blogs, news, etc. is very strong as well. Some structure would definitely be helpful to me, but deep down I always understood that I'll have to sit down and read, and sit down and write, if I'm ever to become a writer, and no class or special knowledge or instruction will ever replace that.
I found your comment about writing as a form of thinking very helpful, and to focus on what I like to think about, or want to think about more. I am going to set aside some quiet time today to mull it over, and write down some topics that seem to always be swirling in my brain. I think I am not in tune enough with myself and so I feel like I am in my own way, rather than having a clear understanding of where my intellectual passions are, hence the difficulty in figuring out what to write about.
I'm not sure that classes are useless because there are some things worth considering that may not be obvious. For example, how much redundancy do you need?
Having varied emotional tone helps a lot with keeping readers' interest. How do you know whether it's something you need to improve?
I don't write anything, so my advice is just as a reader.
First question; who's your favorite online writer, and what do they write about? My favorites typically write about whatever interests them at the mo