Does anyone here, preferably someone based in Africa, know the results of Sierra Leone's parliamentary election on Saturday June 24? I need to resolve https://manifold.markets/duck_master/whats-sierra-leones-political-party (since I really like making markets about upcoming votes around the world). I've been *completely unable* to find stuff about the parliamentary election results on the internet, though the simultaneous presidential election has been decently covered in the media as a Julius Maada Bio win.
E) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.
F) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.
G) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.
I've finally begun to earn enough that self-funded therapy is an available option. I'd like someone who's more towards the CBT/taking-useful-actions end, rather than psychotherapy (I would not mind talking through how I feel about things, it just seems insufficient without talking through potential actions I can take to improve my life.)
Mainly, along with attempting to figure out what to do, what I want is essentially to be able to talk through things with a smart person who's obligated to keep my privacy (I'm generally very bad with trust as far as talking with the people around me is concerned; I'm hoping that a smart stranger, who I can trust to keep things to themselves, would allow me to be more open.)
Also taking recommendations for books/other things which I can use myself. (I tried reading Feeling Great, and maybe I should slog through it-I'm just generally put off by mystical stories which omit most details in the service of making a point-they just seem kind of fake. Maybe I should just get used to that, though.)
Re "smart person." Be careful not to use the desire for a smart therapist as an excuse to avoid therapy ("I'd love to do therapy, but the therapists are all inadequate!"). It seems possible, if not likely, that I'm generally more intelligent than most of the therapists I've had. But for the most part they've been good at what they do. Even if a therapist doesn't follow my brilliant critique of my supervisor's emailing habits, a good therapist will see what's needed to help you understand why you're annoyed by your supervisor etc.
I guess another way of putting it is: The best therapist isn't necessarily the best conversation partner.
All that said, I do like the friend analogy. I think of therapy as friend-prostitution. I receive friend services (listen and help me understand myself and enrich my life), but instead of reciprocating I pay.
I think that's fair. As far as therapy is concerned, by a 'smart person' I just mean someone who's willing to adapt on the fly based on whatever I'm talking about, rather than having an individual course set in mind which they want to guide me towards. Not so much because I find my problems to be profoundly unique, more so because it gives me a degree of comfort knowing that my therapist is competent enough to help me through bespoke situations if they do come up-which, given the presence of disability and other things etc, they will at least some of the time.
I also like the friend analogy and, honestly, if someone came up with that exclusive thing (without even any therapy involved), I'd totally go for it. Most of my friendships seem shallow enough that navigating the "are they good enough a friend for me to be able to trust them and dump my problems on their lap" minefield is headache inducing enough that I just don't, and if I could pay someone to listen to me, a lot of that accounting goes away.
I’ve noticed that Scott often quantifies the social cost of CO2 emissions by the cost it takes to offset those emissions (e.g., in his post on having kids, he says since it costs ~$30,000 to offset the average CO2 emissions of an American kid through their lifetime, if they had $30,000 in value to the world, that’s enough to outweigh their emissions; he does something similar in his post on beef vs. chicken). But this seems wrong to me: the cost of carbon isn’t the cost of offsetting that level of CO2 emissions, especially in a context where carbon offsets produce positive externalities that the market doesn’t internalize (so we are spending inefficiently little on carbon offsets right now). Am I missing something?
I get why this works if carbon offsets were in fact priced at their marginal social value (as the social value of a carbon offset presumably equals the social cost of carbon). But I’m not sure this is true? How are carbon offsets actually priced?
I think it's pretty normal to measure the cost of damages by the amount it costs to fix them. If it costs X dollars to remove Y tons of carbon, and Y tons of carbon causes Z utils of harm to the world, then each expenditure of X dollars on the carbon program "buys" Z utils by removing that harm.
(Well, this is true for carbon offsets where the program is just pulling Y tons of carbon out of the atmosphere and burying them in a mineshaft or something. If the offsets come from something like "we were going to emit Y tons of carbon, but we didn't because we switched to renewable energy," it's not quite as clear, but the logic is similar.)
I suppose you could try to measure the total damage from global warming (e.g., if global warming goes unchecked and we need to build a seawall around Miami, then it's going to cost us a lot more than simply the cost of getting CO2 levels back to normal), but it would be very difficult to calculate the impact a marginal ton of CO2 has on Miami's property values.
I think if you spend money on a carbon offset, you're spending money specifically on "not emitting CO2" rather than "repairing all climate-caused damage in the world," so the cost of the offset is still the appropriate comparison for "how to have kids and not emit excess CO2."
I think this is fine in the context where it’s like, “if you have kids and spend this much money on CO2 offsets, you’re good.” So as a recommendation for action to spend money on something once you have a kid, this seems reasonable.
But I’m still very confused by the line of reasoning which goes: “If your kid adds $30,000 in value to the world, since that’s the cost of carbon offsets, then having kids is worth the cost” (which is in the post, this is just paraphrased). Because that value may not be spent on carbon offsets, and $30,000 cost of carbon offsets ≠ $30,000 social cost from having a kid.
I mean, if you assume, as Scott does, that utility can be meaningfully quantified with money, then using [literally market value of something] naturally follows.
Which is to say, the issue is, fundamentally, not carbon offsets, but an entire economic paradigm. Even if you get the paradigm's users to agree with any of your specific arguments about market not pricing something correctly in a particular instance, they'll nod, call it a "market/regulatory failure, happens", then go back to using monetary value in their reasoning.
Which is an entirely correct, rational and natural thing to do. (And I say this as someone who disagrees with the paradigm, and I think your intuition to question it is also entirely correct. I guess if there's one thing you're missing, it's that you're questioning assumptions which lie on a much higher level of (e.g. Scott's) belief structure than you've imagined.)
I’m really confused by this. I’m fine with quantifying it in dollar terms, and my disagreement is rather *what* dollar value to use (I think the cost of an offset is not the social cost of CO2).
Like, it's true that if you steal $30,000, it's not enough to pay $30,000 back later. Typically a court will allow triple damages. So would you say the social cost of CO2 is $90,000?
Are you thinking of something like CO2 poisoning, sort of like a faulty coffee lid where injury could have been prevented by a $2 lid but now you're on the hook for millions in medical?
You're saying that carbon offsets have positive effects the market doesn't consider, so would that push the cost of children lower than the cost of offsetting their carbon output?
In a case where there are a lot of incalculable qualities but a price still needs to be set, it's fine to set it by the known qualities and adjust later as problems arise.
I agree with the general point about the social cost of carbon not being equivalent to the cost to offset the carbon. Parenthetically, note that David Friedman has a number of posts discussing the social cost of carbon where he argues that the it is unclear whether it is positive or negative, but that it is clear that the common estimates are too high. E.g. https://daviddfriedman.substack.com/p/my-first-post-done-again.
Just realized why you wouldn't want to live for long periods of time and definitely not forever: Value Drift. Your future self would not share the values your current self holds. On a short timeline like a regular human lifetime, this won't matter too much, but over centuries or millennia it starts to look different. Evolution probably has acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and it doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans.
Edit: People need to realize not all value drift will be benign. Some types of value drift will lead to immense evil. I don't-even-want-to-type-it-out type of evil.
Interestingly, this is almost the opposite of a commonly heard argument against longevity - the idea that having a large number of long lived people would ossify values and progress (like the old quote "science advances one funeral at a time").
I really don't see the problem. We forget a ton of stuff, includign our old beliefs. Hell, even old beliefs I find today utterly moronic, I look upon with a kind of benevolant tolerance.
In fact, greater longevity may cause people to be more understanding of other's ideology, because they'd be more likely to have held it before at some point.
A little more seriously; the old people I know are largely the same people they were when they were young. The "value drift" comes from a combination of greater experience and physical deterioration; older people have seen what it means for their plans to be fulfilled, and are more concerned with health because they know what it means to not have it. This argument is essentially an equal argument against education.
"Evolution probably has acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and it doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans."
No. By "artificially extended" I mean anti-aging techniques which directly target the aging process itself, slowing cellular aging and the accumulation of damage over time. Past gains came from reducing environmental causes of death (infection, accidents, etc.). With gradual increases in lifespans, value systems tend to evolve slowly along with the culture and environment. People's core values often remain relatively stable over the timescales we have experienced so far. However, with much longer lives - on the order of 100-200 years or more - individuals may undergo more profound changes in values, priorities, and life goals over time.
At any point in time, your values at one year in the future won't be too different from your current values, so you'd always want to live for at least one more year. From this it follows that you'll never want to die, at least not because you fear that future you is too different from present you.
Epistemic status: NW "close to but under a million", TC $300k, Bay Area
In three words: VTSAX after charity.
So first off, $5M for me would mean that any expense under $500 in a day would round to roughly zero (I'm not a compulsive shopper, so I can generally trust that I will only do "special" expenses a couple of times a week at most, and most of these will be significantly under whatever the threshold is). That's tantalizingly close to the point where UMC-grade domestic travel or most tech gadgets become a rounding error where only the time and effort involved matter instead of the money being even a consideration.
As for surface level changes, I'd still go to work in the office but would be more open to job hopping (e.g. to a quant) since I wouldn't be as dependent on an income I already know to be stable. I'd probably move into a 2B instead of a 1B so I could separate my bed and computer, and if I was staying in a place where I still need a car (that is, not Manhattan) then I'd test drive a top Model S and let my immediate reaction decide whether to buy it, but other than that things would be fairly similar, except for upgrades here and there (eg staying in a suite if the standard room at the hotel I like is too small) and having a much lower threshold of desire needed to buy something to begin with.
Too hard to predict, but I would not be optimistic. People who receive windfalls of that size seem as a rule to fare much worse than people who are born into it, who in turn fare worse than people who acquire it through business or other ventures.
Already retired with a comfortable income so maybe a second home in New Zealand? Not sure if 5 million would be enough for that but it’s a fun idea. For sure I’d take the clunky Otterbox case off my cellphone.
I would retire immediately, otherwise keep my current lifestyle (at least in short term), and invest the remaining money.
In the free time I would start working on my projects, and I would meet my friends more often. Probably would live a bit healthier, having more time for exercise and cooking.
Short-term, not too much. Long term, probably a lot.
Like, I think I'd do the "invest, retire, 4% annual withdraw=$200k" but...
I suspect a lot would change if I became completely location independent but I think the biggest thing would be that combined with experimenting with money.
Like, a lot of us, even if we make good money, aren't in a position to spend $200k. Even if you make $200k, you're not spending $200k but there are spending options @ $200k, especially outside of NY and SF, that are...really interesting.
Like, I don't think a personal trainer at the gym is necessary but...I'd probably be in better shape and I'd definitely be doing more stretching and be safer. I don't "need" a nutritionist but...I'm really curious if one would make a difference.
I hate wearing suits and ties but I have noticed, as you spend more money, they get a lot comfier and...people do always treat people in suits better.
I guess the thing is that, using health as an example, my workout routine and diet are probably, like, at a 7/10 but if I had a $5000/year budget for personal trainers and nutritionists and I started buying everything from Whole Foods I'd get to an 8/10 or 9/10.
But it's also not just money and having spending, I could technically do some of this stuff now, but there's a certain cost in time and money to experimenting with things and finding out what's worth the money and what isn't, especially for me personally. Like, as I've gotten more money, I've found I prefer to pay a premium to live in places where I don't need a car, rather than buy a nicer car. Maybe that's different for you, all good, but...I think I'd spend a decent chunk of time trying to find ways to convert money into happiness; it seems like there's a certain amount of knowledge and experience to that which I don't have.
Seriously though, with an extra five million I'd have two choices -- either upgrade my lifestyle moderately and retire now, or upgrade my lifestyle significantly and keep working. Since I don't currently have any particular plans for what I'd like to do in early retirement, I'd probably keep working until I thought of a better plan.
What kind of lifestyle upgrades? Fancy cars, major house renovations, more expensive holidays and general better quality everything-I-own. I like my house and I probably wouldn't bother getting a different one, but I'd spend half a million renovating all the things I don't like about it.
There's a good chance I'd retire early from my current job and pursue some private projects instead, but I'm not sure about. I'd definitely be moving someplace nicer, and pursuing some major lifestyle enhancements.
I'd buy the house I liked, hire household help, help in-laws with certain healthcare expenses, fund various tax advantage accounts to the maximum, and take a trip somewhere nice.
Depends how many people know about it. I'm terminally unambitious, and I enjoy my job; I would shove the money in the bank, stay in my cheap-ish apartment and keep working. But I might be hassled for money by neighbors if they knew.
A market on Manifold has been arguing about John Leslie's Shooting Room paradox. The market can't resolve until a consensus is reached or an independent mathematician weighs in. Does anyone here have any advice? https://manifold.markets/dreev/is-the-probability-of-dying-in-the
Hi, independent mathematician here, although I don't seem to be able to post there.
This isn't a well-posed question. It falls into a problem that a lot of attempts to formulate probabilistic paradoxes do, which is presupposing the ability to sample uniformly from the integers. But that doesn't work - there simply isn't a probability distribution with that property.
If the probability of the snake biting was more than 1/2, we could still say something meaningful by removing “you” from the picture, and computing the expected number of people who get rich and the expected number of people who get bitten.
But in round n we expect 2^n (35/36)^n people to get rich and 2^n(35/36)^(n-1) 1/36 to get bitten. And both those sums (in fact, both those terms) tend to infinity.
We /can/ say that the expected number of people who get rich in round n is 35 times the expected number of people who get bitten in round n, just as we'd expect.
But “if you are one of those infinitely many people, chosen uniformly at random...” simply isn't a meaningful start to a question.
"Importantly, in the finite version it's possible for no one to die. But the probability of that approaches zero as the size of the pool approaches infinity."
The probability of being chosen goes to zero here as well
Point 5 in the FAQ says "Importantly, in the finite version it's possible for no one to die. But the probability of that approaches zero as the size of the pool approaches infinity."
But this is irrelevant. No matter how big the finite pool of people is, the probability that nobody dies *conditional on you being chosen to play* does not approach zero. (This is because if you are chosen to play it is probably because the game is only a few rounds short of exhausting all potential players and ending without death. To understand the difference, it may help to imagine a city where 99% of the buses have 1 passenger and the rest are full with 100 passengers. The probability of a bus being full is 0.01, but the probability of a bus being full *conditional on you being on board* is about 0.5.)
Your probability of dying, given that you get to play, is only 1/36, no matter how large the finite pool is.
(And in the case of an infinite pool of players, the question doesn't make sense, as the premise "choosing each group happens uniformly randomly" is impossible.)
Not a mathematician, but that's mostly coming down to word choice. What does it mean to be "chosen" to play? Are you chosen when you show up, or when you roll the dice?
If you're chosen by showing up and your grouping is random, then it's going to be some weird thing where your odds of dying are *members of group*/36, minus the odds of a previous group rolling snakeeyes. So, like, previous round times 2, times 35/36 each round. Or something. And does that already cover groups past yours? I'm not a mathematician.
If you're instead chosen when your group has the roll, it's 1/36, full stop.
...do I have that backward? I think the group size actually counts in the player's favor; before the first roll, your odds of dying decrease in each successive group, and with double participants in successive groups your chance of getting selected for the later groups is much higher, so your odds of dying before the first roll should be significantly lower than 1/36.
Canada's wildfires have broken the annual record [since good national records exist which seems to be from 1980] for total area burned, and they're just now reaching the _halfway_ mark of the normal wildfire season.
Meanwhile the weather patterns shifted overnight and Chicago is now having the sort of haze and smell that the Northeast was getting a couple weeks ago:
Did my usual 1.5 mile walk to the office this morning, from the South Loop into the Loop. The haze is the worst I can remember here since I was a kid which was before the U.S. Clean Air Act, and the smell is that particular one that a forest fire makes. (Hadn't yet seen the day's news and was walking along wondering which big old wood-frame warehouse or something was on fire.)
Southern Michigan here. The haze today felt oppressive and somehow demonic. Outdoors smells like a house burning down. Spent about two hours outdoors this evening for a good reason, but now I have a sore throat. I hate it.
Yea the sore throat has been constant for me since Tuesday, and is irritating both literally and mentally.
My adult son, who lives now in a different part of Chicago, woke up Tuesday morning with a splitting headache and thought it was an allergies thing until he saw the morning news reports. The June weather here has been quite pleasant and we've all been happily (until this week) sleeping with lots of windows open.
Of course folks with specific conditions like asthma and/or who are elderly have it a lot worse. My eldest sibling is 69 now and has had various respiratory issues for years, lives on the city's South Side, and he's just had to be a recluse this week.
It's been breezy here and the winds are supposed to swing around to be from the south tonight/tomorrow which should push a lot of the haze away (hello Wisconsin please enjoy this gift from us). But then over the weekend the predictions are for a shift back to winds being out of the north. And Canada seems to be having having little progress in getting fires under control. So, rinse and repeat I guess.
I have been trying to track down a specific detail for a while with no luck. The first Polish language encyclopaedia, Nowe Ateny, has this comment on dragons that is among its quotable lines (including on the Wikipedia page!): "Defeating the dragon is hard, but you have to try." This is very charming and I can see why it's a popular quote, and I'm interested in finding the original quote within the text, but searching the Polish word for dragon (smok, assuming it wasn't different in the 18th century) hasn't revealed anything. Would anyone be able to find the sentence and page that it appeared on?
I tried for a while to use ChatGPT for this, thinking that it's the sort of "advanced search engine" task it would be good at, but the results I got were abysmal.
Thank you to Deiseach, Faza, and Hoopdawg! I had a feeling that it wasn't so clear cut, and this is exactly the sort of detailed breakdown that I was hoping someone could do for me. I appreciate you taking the time to use your research skills like this.
You're welcome, this is the kind of fun, nothing of huge importance riding on it and more interesting than the work I should be doing right now stuff I enjoy 😁
Looking at that, there is a Latin superscription on the drawing, and I think that the 'translation' is probably a joke by someone:
Latin is "draco helveticus bipes et alatus" which translates to "bipedal and winged Swiss dragon"
I think "the dragon is hard to beat but you have to try" is someone making a joke translation of the Latin text. (EDIT EDIT: I was wrong, see below)
EDIT: On the other hand, this guy is giving "quirky quotes" from the book and he translates it that way, but with a different illustration to the one in the Wikipedia article:
EDIT EDIT: And we have a winner! Copy of the text here, with illustrations, and from the section of illustrations, the one titled "How to beat a dragon" has that very text and translation!
It does appear, however, that the quote is - in fact - spurious. It doesn't appear in the scan of the 1745 edition (the section on dragons begins on p. 498, here: https://polona.pl/item-view/0d22aab6-4230-4061-a43e-7d71893ad2bc?page=257), nor - for that matter - in the transcribed text of the encyclopedia on the page you linked (the dragon falls, quite sensibly, under reptiles).
The illustrations aren't part of Chmielowski's encyclopedia - as can be readily checked by looking at the scan - but rather come from Athanasius Kircher's "Mundus Subterraneus" - https://en.wikipedia.org/wiki/Mundus_Subterraneus_(book).
Lord only knows who came up with the accompanying text for that particular illustration, but I suspect the editor of the linked online edition.
I've noticed that the online transcription that you linked differs significantly from the scanned 1745 edition - to the point that it contains entire paragraphs that cannot be found in the 1745 printing.
Notes to the online text (https://literat.ug.edu.pl/ateny/0100.htm) state that it is based on a 1968 selection and edition by M. and J. Lipscy. Therefore it is possible that the quote was introduced in this prior edition, together with the illustrations. Chmielowski certainly uses Kircher as one of his sources when writing on dragons, so it's not entirely baseless, but that still doesn't answer the question of where the quoted sentence came from.
Unfortunately, I'm not likely to be able to lay my hands on the Lipscy edition, so it will probably remain a mystery.
ETA:
All told, my trust in the online transcription is pretty low, given that it describes itself as: "erected on the internet for the memory of the wise, the education of idiots, the practice of politicians, and entertainment of melancholics". I most certainly get the joke, but the fact it *is* a joke makes me suspect that the entire enterprise isn't too serious about itself, academic setting notwithstanding.
You've beaten me to... basically everything, so, spared from being a downer, all I have left to point out is that Nowe Ateny had two editions (1745 and 1754), of which only the first appears to be available online. So while the 1968 text cannot be considered a valid source or proof, it's still possible that the quote did, in fact, appear in the original.
Psychedelics affect neuroplasticity and sociability in mice... Maybe I should dose my cat (The Warrior Princess) with MDMA to make her more sociable with the neighborhood cats. She does love to brawl!
My fur buddies - Moose and Squirrel - just graduated to adult cat food on their first birthday. They _really_ love to wrestle with each other. When they are upstairs and I’m downstairs it sounds like a couple of full sized humans going at it.
I’ve come up with a couple of distractions to keep them apart though. My profile photo shows Moose enjoying one of his favorite videos.
Saw a great cartoon one time of a fat disgruntled cat whose owner had presented him with some cutesy cat toy. Cat's thinking "Look at this lame toy! I just want to be out fucking and fighting with my friends."
Dunno if this would interest Warrior Princess, but one of my young Devon Rexes really *loves* puzzles. Got some here: https://myintelligentpets.com/products/mice. He likes the 3x3 sudoku and is rapidly getting expert at it -- whips through it very efficiently these days. Am pretty sure Mice and Pets O'Clock will work well for him too. Some of the others have design problems. Also make him little puzzles using a kid's plexiglass marble run set -- he has to tip the tube to make the treat fall out. The other cat hates treats so I can't use these puzzles on him, but he likes toy challenges, where I build a little thing with toys stuffed inside or under it and he has to work to get at them.
I hope shameless self-promotion isn't forbidden here, but I thought some in this community in particular might enjoy my near-future sf story "Excerpts from after-action interviews regarding the incident at Penn Station," published last week in Nature. (947 words)
I've been writing a novel on AI and sharing weekly. The (tentative) blurb is "Why would a perfectly good, all-knowing, and all-powerful God allow evil and suffering to exist in the world? Why indeed?" I just posted Chapter 5 (0 indexed), hope it's of interest!
Ha. The program constantly glitching out and killing people brings up that it's glitching out and killing people, and the programmer doesn't consider that it might be the same glitch.
I'm not a huge fan of these kinds of little event skips, but otherwise these have all been fun.
Thanks! The idea was that the “EmilyBot” that brought up the killings was trained on the real Emily’s communications, implying that the real Emily knows about them, and likely will report on them. Is that what you’re referring to or did I misunderstand you?
That's the one. Just like the Marilyn sim was based on available Marilyn data, and then glitched out and killed the user repeatedly. Found it funny that even after the 'main' program tells him the line is based on nothing, he still doesn't consider it might just be glitchy. Hook, line and sinker, that guy.
Nothing about the "official" and public story about the Day of Wagner makes sense.
That story, roughly: after weeks of verbal escalations, Prigozhin declares open revolt around 24 JUN 0100 (all times Moscow time). At 5AM, the troops enter into Rostov-on-Don, and "take control" of the city (or one military facility) without resistance.
The troops then start a 650 mile journey from Rostov-on-Don to Moscow. The goal? Presumably, a decapitation strike against Putin. Except, rumor has it that Putin wisely flew to "an undisclosed location".
The Russian military set up blockades on the highway at the Oka river (about 70 miles south of downtown Moscow), and basically dared Prigozhin to do his worst.
In response, Prigozhin ... surrendered completely before midnight, accepting exile in Belarus. The various Wagner troops are presumably going to follow the pre-announced plan of being rolled into the regular Russian army on July 1.
... while I can't rule out that there was an actual back-room coup attempt, it seems more likely that this was a routine military convoy that was dramatized in the Russian media, and then re-dramatized by the Western media as something that was not choreographed ahead of time.
I think it makes sense if Prigozhin saw himself going the way of Röhm, absent something drastic.
The situation has clearly been unstable for a while now, and the side that moves first gets the advantage.
So he launches a mutiny, not a coup. A show of force in order to improve his position within the system, rather than an attempt to take over. He knows Wagner can't sucessfully march on the Kremlin so he quickly negotiates an off-ramp via Lukasheno.
He and his allies are alive and free, Wagner no longer exists (not that it officially did in 2021) and a minimal amount of blood has been spilled. It could have gone a lot worse for everybody involved.
Can you go into more detail on why you think Putin/Prigozhin et al. would have staged a fake insurrection? I agree that the details of the situation seem strange and confusing, and I wouldn't necessarily trust the official story coming from any individual actor, but it makes less sense to me that Putin would have deliberately staged a fake insurrection.
Putin's swing from promising to punish the traitors to giving Prigozhin a cushy retirement makes him look weak as far as I can tell, and will embolden the Ukraine and their NATO supporters while disheartening Russian soldiers and civilians. I appreciate that you may not see things that way, but what exactly does he gain from this that would be worth it?
I can't, because I don't *know* why. But this would not be the first time Putin has resorted to needless theatrics.
My top three guesses are "because Putin wants a surprise attack by Wagner against northern Ukraine to be a surprise", "because he wants to humiliate the West", and "because he actually has brain damage". But all of those are speculation I would prefer not to publish in that context.
Thus is a serious question: how is your Russian comprehension/fluency? You seem not to site any Russian sources, and your phrasing “Russian press” or “Russian media” is very… American/Western. Can you name three Russian press outlets that you think fit your description and are useful in the context of this discussion?
I know that Я is the last letter of the Russian alphabet, but I'm not confident about the order of the other letters.
I am explicitly not going into detail on certain points where the Russian press is more accurate than the Western press. For example, the idea that Wagner occupied all of Rostov-on-Don isn't anywhere in the Russian press, but a lot of Westerners have run with it.
But for the statements of Putin/Prigozhin/etc. the translations into English are fine.
I appreciate that. I guess that I, and it looks like most others here, aren't going to place much credence in a theory that involves a large conspiracy without a clear unifying motive.
I concede that if this is a smokescreen to cover repositioning to enable a surprise attack in northern Ukraine, I will be suitably impressed. I guess we will find out soon enough, though that does seem like the kind of 3D chess move that rarely plays out in real war. Definitely not impossible, though.
You are dramatically overestimating the competence of my government to organise such a prank on purpose. Frankly, I'm surprised that after all the clusterfuck of the Ukrainian war people keep mistaking in this direction.
The null hypothesis is that it looked like a mess because it was a mess. There were different plans by multiple parties and they didn't go as expected so everyone just defaulted to a completely unsatisfying compromise.
" I'm surprised that after all the clusterfuck of the Ukrainian war people keep mistaking in this direction."
May I humbly offer an explanation? Here: "The culture one knows nothing about has all the answers". Especially when it has a weird alphabet with too many characters. See also, e.g., "tiger mother".
On a more serious note, it is impossible to understand a culture without being proficient in its language. I mean, citing CNN (!) as any sort of an authority on Russian affairs...
There is much we don't know about Prigozhin's coup attempt, like what was the actual goal and why he blinked. But the evidence that this was a coup attempt (broadly speaking) and not a bit of spin on an ordinary troop rotation, is overwhelming. To believe that, means believing that both Putin and Prigozhin publicly acted out a role that makes both of them look weaker than if nothing had happened. Means believing that the Russian government, the Ukrainian government, the US government, probably several Western European governments, the western OSINT community, and the Russian milblogger community, all publicly endorsed a lie that each of them knew that any of the others could disprove. For what, to entertain the rubes on TV for a day? Distract them from Hunter Biden?
There's no plausible motive to unite everyone who would have to have been united for that version to have played out. It's a straight-up conspiracy theory, of the sort that the general argument is properly against. And not understanding a thing, is not evidence that the thing is a hoax.
Nothing about this comment makes sense. Can't fully rule out that you might actually believe what you're saying, but it seems more likely that you are Scott Alexander, posting controversial content under an alias to increase engagement.
Three fourths of the original post is the author noticing he is confused by the publicly known facts. You and Melvin poking fun at them is doing a disservice to rationality project.
For the record, I don't think the "dramatized routine military convoy" theory holds any truth.
Light disagree. While noticing your confusion is indeed a virtue, it's also important to notice when your alternative theory is producing even more confusion. Maybe poking fun isn't the best strategy here but it's not entirely unappropriate.
The core point is that approaching everything with such level of motivated scepticism is unsustainable and self defeating.
This entirely ignores Prigozhin's public statements and the charges opened against him? It also misunderstands how the Russian media works.
Now, admittedly, this is a very confusing situation, but that's because...well, we don't know what was agreed to, or if anyone is actually planning to live up to those agreements, or what threats/promises were actually made.
I'm all for acknowledging the fog of war and the propaganda machine and not rushing to judgment, and the appropriate way to deal with those things is to apply a reasonable discount to the value of evidence you obtain based on source and circumstance.
But one should also be cautious not to *over* discount evidence. Otherwise you can end up surrounded by imperfect-but-still-useful information, irrationally discount it all to zero because "it's all just a product of the spin machine," and just sit irrationally ignorant in a sea of useful information.
And I think trying to explain Wagner's activities on July 24 as a "routine military convoy that was dramatized in the Russian media" is very much applying too much discount to too much evidence pointing to the simple explanation that what both Russian and Western media has portrayed as an act of armed insubordination was just that.
Just for starters, I don't think Vladimir Putin would have personally referred to a "routine military convoy that was dramatized in the Russian media" as treason.
A skeptic can point to any one of these kinds of information nuggets and rightly say that they aren't perfect, but they pile up and at some point it's more foolish to discount them all than it is to believe them.
Aside from the simple fact that it is merely an opinion piece, and not suited to disproving matters of fact, the opinion piece you reference merely observes that the circumstances are strange. It doesn't even dispute that the intercession itself happened - it simply observes that "Lukashenko’s apparent intercession raises more questions than it answers" because "Lukashenko is clearly seen as the junior partner in the relationship with Putin," "[d]elegating Lukashenko to resolve the crisis further damages Putin’s image as a decisive man of action," etc.
But one can't simply take a single observation that a thing is unusual and treat it as evidence that "answers [someone else's] point" that the thing more likely happened than not in light of the large amount of diverse reporting indicating that it did.
You can play evidence whackamole here finding reasons that my evidence is imperfect, JW's evidence is imperfect, beleester's evidence is imperfect, etc.
But even owning that the evidence is imperfect, you're not addressing the *volume* of it all pointing in the same direction. Which is irrational to do, and leading you to talk yourself into discounting a very likely explanation for something in favor of an extraordinarily unlikely one. That path leads, much more often than not, to being wrong, and I'm very confident that you are in this case.
[follow on note - even your opinion piece itself describes Wagner's activities as follows: "A quick recap: A major crisis shook the foundations of the Russian state Saturday, as forces loyal to Wagner mercenary boss Yevgeny Prigozhin marched toward Moscow. Then, an abrupt reversal happened — Prigozhin called off their advance, claiming his mercenaries had come within 124 miles of the capital but were turning around to avoid spilling Russian blood." When even your own evidence is describing it as a major crisis "march on Moscow," I really don't see anything but motivated reasoning to support the proposition that this was all somehow a big misunderstanding about "a routine military convoy that was dramatized in the Russian media"]
That strikes me as less "we don't believe it happened at all" and more "we don't believe it happened the way they say it happened."
Like, it seems reasonable to question the idea that Putin and Prigozhin just hugged it out and went back to work immediately after hurling accusations of treason at each other. But it seems even more doubtful that Putin, Lukashenko, and Prigozhin all got together and agreed to say that Lukashenko averted a near-mutiny for no apparent reason.
Why would the Russian media make Putin look weak and vulnerable by inventing a coup when none existed? Very likely, Prigozhin expected the generals of the official army to join him after he declared his rebellion. When that didn't happen, he knew he was done for. Exile was the best he could hope for, and that's essentially what he got.
Making the Western media look stupid is a sufficient reward for Putin.
But ... the press has been discussing for weeks the ongoing power struggle between Prigozhin (who is [or was] formally independent from the Russian military hierarchy) and the Russian military hierarchy. That had to be resolved somehow. It seems the resolution is sending Prigozhin to Belarus.
For a dictator, looking weak is life-threatening. Making Western media look stupid is absolutely not worth that. (And anyway, if the media making a prediction about breaking news that turns out wrong is "looking stupid", then they look stupid every day.)
And he does look weak. He publicly declared Prigozhin to be a traitor who must be destroyed, and prepared Moscow for a military attack from Prigozhin. After that, doing anything short of blowing up Prigozhin and Wagner makes him look weak. Instead he publicly makes a deal forgiving them.
My best guess for what happened is that Putin ordered the military to blow up Wagner, and the brass quietly refused, and then the brass quietly told both Putin and Prigozhin how it was going to be.
> My best guess for what happened is that Putin ordered the military to blow up Wagner, and the brass quietly refused, and then the brass quietly told both Putin and Prigozhin how it was going to be.
That theory would neatly explain why both Putin and Prigozhin would go along with a compromise which makes Putin look weak and would normally leave Prigozhin in a position where his odds of fatally falling out of a window are probably 20% per day.
The problem with the theory is that the brass would have to be very sure of their position if they are willing to disobey Putin without getting rid of him, as I assume that he would not take that well. Refusing to fight a traitor generally makes you a traitor, so as long as you end up on Putins shit list anyhow you might at least try for the less risky outcome.
The military disobeying Putin would de facto be a coup with the additional handicap of keeping Putin as a figurehead. While one might stage a coup with 70% of the armed forces on ones side, one would basically need to have 99% of the forces on ones side to try for the figurehead maneuver. I do not think Putin has organized his security apparatus so incompetently that this is likely.
> The military disobeying Putin would de facto be a coup with the additional handicap of keeping Putin as a figurehead.
Is it possible that the actual power was already transferred from Putin to someone else some time ago? (As a mutual agreement -- old man Putin is allowed to spend the rest of his days in peace, providing legitimacy to the new ruler's first months in power.) And Prigozhin simply wasn't in the loop, because he was away from Moscow.
The person who has actual power is the person who others (especially soldiers) think have power, and will thus obey. What would a secret transfer of power even mean?
Everyone keeps saying "this makes Putin look weak". (My first draft about this included that line as well.) But does it?
The logic is different if this was real or kayfabe, but the outline is the same: Prigozhin made a lot of threats, Putin said "Go ahead and hit me", and Prigozhin immediately surrendered in response. I'm sure the Russian media will say that this shows how strong Putin is: he defeated a coup without firing a shot!
For any autocrat ancient or modern, coups are a danger. Discouraging your generals from trying any coups is very important.
One central promise which keep your generals in line is that anyone who tries a coup and fails will end up with their head on a spike, possibly with their friends and family next to them. If you don't follow through on that, it establishes a terrible precedent.
Even long-established democracies would totally throw the book at a military official who tried their hand at regime change.
"Come at the king, best not miss". I agree that the other theory that makes sense is "Prigozhin is already dead and they are waiting to announce it until they can blame it on Ukraine".
You think it's likely that Wagner shooting down 3 out of Russia's total supply of ~20 Mi-8MTPR-1 EW helicopters, among other air assets, leading to at least 13 airmen deaths, was part of a routine military convoy that was choreographed ahead of time?
{{evidence needed}} - not just a "Business Insider says the Kyiv Independent says Ukrainian officials say it happened" citation, but actual evidence that includes details such as when planes were shot down, and in what oblast.
Without a citation, I don't know if I should consider this as either "fake news", "a successful attempt to detect traitors in the Russian air force", or "actual evidence against my theory".
Oryx has links to pictures and Russian sources of multiple helicopters and ground vehicles that were destroyed. I'm not enough of a Geoguessr expert to personally confirm they happened in Voronezh, but at the very least it's suspicious that these new photos all came out while the convoy was on the move:
This strikes me as the most compelling - you can *maybe* argue that the helicopter shootdowns were some sort of incredibly tragic miscommunication, but tearing up the roads in front of the convoy pretty firmly demonstrates that you don't want the convoy to go to Moscow.
Also "an attempt to detect traitors in the Russian air force" makes no sense. If you suspect someone of treason, putting them at the controls of a loaded attack helicopter is the last thing you'd want to do. What next, will Putin detect traitors in his staff by handing each of them a handgun and seeing who takes a shot at him?
Russia has been in a hot war in Ukraine for the past 16 months, and almost every day since then there have been reports of Russian materiel being destroyed.
Yes. In Ukraine. And very occasionally in Russia, but then always either close to the border or involving very fixed targets. Ukraine does not have any weapons that could plausibly target a helicopter flying at low altitude over Voronezh, from any territory Ukraine currently controls.
What exactly are you claiming here? That the Ukrainian air defense forces just coincidentally had their most successful day ever on the same day Russia decided to move a huge column of military hardware from Rostov to Moscow, which coincidentally also happened on the same day they did some major road work on the roads to Moscow?
EDIT: And also, the Ukrainians didn't claim this success as their own, they decided to claim it happened in Voronezh for the lulz?
Ukraine has all sorts of restrictions on their use of foreign weaponry, and the most important one is "don't use NATO materiel to attack targets in the Russian Federation".
They *have* to lie about it if that is what happened.
That thread gives latlon 49.649689, 39.846627 . Which is very close to the Ukrainian border, but not particularly close to the M4 motorway which the Wagner convoy was on.
I think that is just the ongoing War in Ukraine. If there was some intra-Russian friendly fire here, it had nothing to do with either Prigozhin's political ploys or the convoy to Moscow.
Yeah, close the 2014 Ukrainian border, but not anywhere near Ukrainian controlled areas - something like 20 miles from the M4 highway, but eyeballing it like 75 miles from the front lines.
Seems like you're not interested in changing your mind.
Edit: Also, how close is the wreck supposed to be to the highway, if it was travelling at several hundred miles per hour when it was shot down?
"The leader of the Wagner mercenary group defended his short-lived insurrection in a boastful audio statement Monday, but uncertainty still swirled about his fate, as well as that of senior Russian military leaders, the impact on the war in Ukraine, and even the political future of President Vladimir Putin.
Russian Defense Minister Sergei Shoigu made his first public appearance since the uprising that demanded his ouster, in a video aimed at projecting a sense of order after the country’s most serious political crisis in decades.
In an 11-minute audio statement, Yevgeny Prigozhin said he acted “to prevent the destruction of the Wagner private military company” and in response to an attack on a Wagner camp that killed some 30 fighters.
“We started our march because of an injustice,” Prigozhin said in a recording that gave no details about where he is or what his future plans are.
A feud between the Wagner Group leader and Russia’s military brass that has festered throughout the war erupted into a mutiny that saw the mercenaries leave Ukraine to seize a military headquarters in a southern Russian city and roll seemingly unopposed for hundreds of miles toward Moscow, before turning around after less than 24 hours on Saturday.
The Kremlin said it had made a deal for Prigozhin to move to Belarus and receive amnesty, along with his soldiers. There was no confirmation of his whereabouts Monday, although a popular Russian news channel on Telegram reported he was at a hotel in the Belarusian capital, Minsk.
In his statement, Prigozhin taunted Russia’s military, calling his march a “master class” on how it should have carried out the February 2022 invasion of Ukraine. He also mocked the Russian military for failing to protect the country, pointing out security breaches that allowed Wagner to march 780 kilometers (500 miles) without facing resistance and block all military units on its way.
The bullish statement made no clearer what would ultimately happen to Prigozhin and his forces under the deal purportedly brokered by Belarusian President Alexander Lukashenko...."
[Addendum: I also just read Reuters' article about the audio recording, posted 20 minutes ago, it's basically the same as the AP's.]
I'm gong to be searching for a new job soon. I've seen lots of posts about LLMs helping people with resumes and cover letters etc. so I have a few questions:
1. Is this actually something that GPT is good enough at that if you are someone who is mediocre to average at resume/cover letter writing that it will meaningfully help?
2. is GPT-4 enough better on this kind of task than 3.5 to be worth paying for?
3. Is there some other tool or service (either human or AI) that is enough better than ChatGPT that is worth paying for and would obviate the need of paying for GPT 4 for this purpose?
FWIW, I've heard that Bing in creative mode uses GPT-4.
In general, you should try it. It's hard to say whether it's "good enough" depending on the person, the prompts, and a bunch of other variables, but spending more time revising your resume will probably make it better, and if using an AI helps you to spend more time you end up with the same result.
I think this probably depends significantly on which field and level you're writing the resume for. What I would look for in hiring entry-level software devs is going to be different than what someone hiring for something else would look for (and tbh, is probably different than what my manager is looking at in selecting candidates that I would see). It also depends on your level of (relevant) work experience.
The raw information is probably more important than the presentation of it unless you're leaning hard into the florid side on both questions, and I feel like it's hard for ChatGPT to fuck that up.
(As for 3.5 vs 4: if you can afford to toss out $20, GPT4 is fun enough to play with that you might want to try it anyway. It *is* measurably better at most things, but probably not enough to be a dealbreaker if that $20 is needed elsewhere)
So Ecco the Dolphin wasn't based on Lilly or his theories of LSD/Dolphins/Sensory deprivation...
But it was based on a a movie that was inspired by Lilly and the creator likes the adjacent theory of Dolphins/Sensory Deprivation, but not the LSD portion? And the Dolphin is coincidentally named after one of Lilly's coincidental theories, but the author assures us that is pure coincidence.
Surgeons have a reputation for working really punishing hours, up there with biglaw associates and postdocs. I'm trying to understand why. Is it just the residencies that are punishing, or do the long hours extend into post-residency careers? And what's driving people to keep going?
I'm not a surgeon, but I remember reading a comment from a surgeon once addressing the question of why surgical training is so grueling (maybe it was even a comment on one of Scott's blogs lol!)
The answer was basically, surgeons frequently have to perform at a high level under conditions of extreme stress and fatigue; the only way to become good at that is to get lots of practice performing at a high level under conditions of extreme stress and fatigue.
(b) No hospital department, division or service shall schedule residents for in-hospital call more than seven (7) nights in twenty-eight (28), including two (2) weekend days in eight (8) weekend days over that twenty-eight (28) day period. A weekend day is defined as a Saturday or a Sunday.
...
In those services/departments where a resident is required to do in-hospital shift work (e.g. emergency department, intensive care), the guidelines for determining Maximum Duty Hours of work will be a sixty (60) hour week or five (5) shifts of twelve (12) hours each. Housestaff working in these departments will receive at least two (2) complete weekends off per month and (except where the resident arranges or PARO agrees otherwise) shall between shifts be free of all scheduled clinical activities for a period of at least twelve hours. All scheduled activities, including shift work and educational rounds/seminars, will contribute towards calculating Maximum Duty Hours.
Things have chhanged, in Europe at least. My surgical internship in the nineties in germany was quite intense. My then-wife chose to meet the seamen's wives as she claimed she wouldn't see me more often than the other girls there did. We interns did this because it was the only way to become qualified in this field.
Since european regulations kicked in, everyone has to go home after a night on call and as a rule, working hours have to be documented and are limited to 48 a week.
As an ignorant uninformed outsider, surgery is one of those things that does require a lot of hours to do. First, you're putting in the time to learn how to cut people open, take bits out, sew them back up, and have them live after all that. You're watching, assisting, doing.
Then once you're a fully qualified butcher, it does take hours to cut people open, take bits out, and sew them back up. It really is one of the descendants of the traditional 'medicine as a guild' practice.
I think they get a lot of time off too. At least the surgeon I know does. But he does work VERY long hours sometimes.
Surgeries themselves sometimes take quite a long time. You might only be in surgery for a couple hours, but you also have to prep and debrief and do a bunch of paperwork. I would think if you "work task unit" took several hours, there would be a bias toward working fewer longer shift s in the name of efficiency. You can't just squeeze in one more surgery in 30 minutes at the end of a shift.
Some of it is probably golden handcuffs a bit too. When you get paid an obscene amount to do something, it can be hard to stop even if your work/life balance sucks.
I run into that with my work sometimes, where money will just fly out of my computer as long as I am willing to sit at it. Makes it tempting to put in long hours because more work means more pay in a way it does not at many other jobs.
Sticking with the theme of early hominins (and AGI), which I also posted about below, I'm wondering if new discoveries about Homo naledi don't complicate the evolutionary analogy often made by FOOMers, best expressed by the cartoon going around showing a fish crawling out of the water before a line of various creatures in different stages of evolution with modern man at the front of the line. Each creature thinks "Eat, survive, reproduce" except for the human who suddenly thinks "What's it all about?" https://twitter.com/jim_rutt/status/1672324035340902401
The idea is that AGI suddenly changes everything and that there was no intermediary species thinking "Eat, survive, reproduce, hmm... I sometimes wonder if there's more to this...." I.e., AGI comes all at once, fully formed. This notion, it seems, has been influential in convincing some that AI alignment must be solved long before AGI, because we won't observe gradations of AI that are near but not quite AGI (Feel free to correct me if I am totally wrong about that.)
Homo naledi complicates this picture because it was a relatively small-brained hominin still around 235,000 - 335,000 years ago which buried its dead, an activity usually assumed to be related to having abstract notions about life and death. It also apparently made cave paintings (although there is some controversy over this, since modern humans also existed around the same location in South Africa).
I want to start a campaign against the concept of alignment, which I think is incoherent. Humans aren't even aligned, so how are we going to align an AI? I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.
> I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.
Part of the pro-alignment argument is that an AI would not follow the rules in the way we want without understanding our values. OTOH, understanding does not imply sharing.
First, I would argue that most humans are sort-of aligned. They might cheat on their taxes, but will generally be reluctant to murder children even if it would be to their advantage.
Furthermore, most humans are not in a position of unchallenged power, so social incentives (like criminal law) can go a long way to stop them from going full Chaotic Evil. A superintelligence without any worthy opponents would not be kept in check by externally imposed incentives.
I assume that making a randomly selected human god-emperor of the world will at the worst result in them wasting a good portion of the worlds GDP on their pet projects, hunting some species to extinction or genociding some peoples. Perhaps a bit of nuclear geoengineering. Perhaps one percent of human god-emperors would lead to human extinction.
By contrast, it is assumed that the odds of a randomly selected possible AI being compatible with continued human agency are roughly nil, simply because there are so many possible utility functions an AI could have. When EY talks about alignment, I think he is not worrying about getting the AIs preference for daylight saving times or a general highway speed limit (or whatever humans like to squabble over) exactly right, he is worried that by default, an AI's alignment will be totally alien compared to all human alignments.
Explicitly implementing rules with machine learning is seems to be hard. Consider ChatGPT. OpenAI did their best to make it not do offensive things like telling you how to cook meth or telling racist jokes. But because their actual LLM was just a bunch of giant inscrutable matrices, they could not directly implement this in the core. Instead, they did this in the fine-tuning step. This "toy alignment test" failed. Users soon figured out that ChatGPT would happily recite meth recipes if asked to wrap them in python code and so on.
Making sure an AI actually follows the three laws of robotics feels hard. (Of course, Asimov stories are full of incidents where the three laws lead to non-optimal outcomes).
I think we mostly agree. Humans are sort of aligned, and a random AI likely wouldn't be. However, we're not going to end up with random AIs, since we're evolving/designing them, so they will be much closer to human preferences than random ones. Unfortunately, an
Anyway, Asimov's laws probably aren't the right metaphor either. As you note, they're hard to implement and have unintended consequences, like any simple rule overlaid on a complex system.
Mainly, I was focusing on how sort of aligned along human lines seems inadequate for an AI, similar to how we wouldn't accept self-driving cars that have accidents at the same rate as humans. Alignment also seems hopelessly fuzzy compared to thinking about actual moral calculation, but maybe people who have thought about it more than me are clearer about it.
>I want to start a campaign against the concept of alignment, which I think is incoherent. Humans aren't even aligned, so how are we going to align an AI? I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.
QuietNaN
>First, I would argue that most humans are sort-of aligned. They might cheat on their taxes, but will generally be reluctant to murder children even if it would be to their advantage.
I notice neither of you offer a definition of "alignment".
One thing it could mean is one entity having the same values as another.
There is no evidence that humans share values. CEV is not eidence that humans share values, because CEV is not a thing. If humans do not share values, then alignment with the whole of humanity is impossible, and only some more localised alignment is possible. The claim that alignment is the only way of achiveing AI safety rests on being able to disprove other methods, eg. Control, and on being able to prove shared universal values. It is not a given, although often treated as such in the MIRI/LW world.
Another thing it could mean is having prosocial behaviour, ie alignment as an end not a means.
QuietNaN
>Furthermore, most humans are not in a position of unchallenged power, so social incentives (like criminal law) can go a long way to stop them from going full Chaotic Evil.
If the means of obtaining prosocial behaviour is some kind of external threat, that would be Control, not Alignment.
I would ad-hoc define alignment as minimizing the distance between two utility functions.
> There is no evidence that humans share values.
Are you arguing that our values are all nurture, and that is raised in an appropriate environment, we would delight in grilling and eating the babies of our enemies? That even the taboo against killing close family is purely socially acquired?
Arguing that humans share no values feels like arguing that it is impossible to say if elephants are bigger than horses, because the comparison depends on the particular elephant and the particular horse.
The claim of shared human values is not that everyone shares some values, it is the weaker claim that the vast majority of people share some value. Sure, you have the odd negative utilitarian who believes that humanity would be better off dead, and there is probably some psychopath who would delight in torturing everyone else for eternity. Even horrible groups like the Nazis or the Aztecs don't want to kill all humans.
Arguing about the specific alignment of AGI seems like arguing over who should get to run the space elevator. I would not want a superintelligence running on any interpretation of Sharia law or the core principles of the Chinese Communist Party, but would prefer (a human-compatible version of) either to a paperclip maximizer, which seems more like the typical misalignment magnitude of a random AI.
Control of a superintelligence seems hard. If we can align it, we can certainly make it controllable. If we don't know if it has an internal utility function and what this function might be, it seems quite hard to control it. Even if you just run it as an oracle, how do you know that whatever advice it gives you does not further its alien long term goals?
External threats will not work on something vastly more smart than humans. We can only punish what we can detect, so the AGI only has to keep its agenda hidden until we are no longer in the position to retaliate.
> Are you arguing that our values are all nurture, and that is raised in an appropriate environment, we would delight in grilling and eating the babies of our enemies?
It's happened.
> The claim of shared human values is not that everyone shares some values, it is the weaker claim that the vast majority of people share some value.
But even if there is some subset of shared values, that is not enough. If your AI safety regime consists of programming an AI with Human Values, then you need a set of values, a utility function, that is comprehensive enough to cover all possible quandaries.
You can see roughly 50-50 value conflicts in politics -- equality versus hierarchy, order versus freedom, and so on. If an AIs solution to a social problem creates some inequality, should it go ahead? Either you back the values that 50% of people have, or you leave it indeterminate, so that it can't make a decision at all.
> . I would not want a superintelligence running on any interpretation of Sharia law or the core principles of the Chinese Communist Party,
Millions would., Neither is a minority interest.
> a paperclip maximizer, which seems more like the typical misalignment magnitude of a random AI.
So your defense of the human value approach is just that there are even worse things, not that it reaches some absolute standard.?
> If we can align it, we can certainly make it controllable.
The point of aligning it is that you don't need it to be controllable.
> Even if you just run it as an oracle, how do you know that whatever advice it gives you does not further its alien long term goals?
How do you know it has long term goals?
Maybe everything sucks. My argument against aligning an AI with human value is that human value isn't simultaneously cohesive and comprehensive enough, not that there is something better.
> External threats will not work on something vastly more smart than humans.
Some sort of alignment has to be solved before takeoff, whether foom or not. OTOH, an AGI probably has to exist before alignment is possible. So there's a very narrow window. And I think that "alignment", in the general form, is probably NP hard. I also, however, think that the specific form of "We like people. Even when they're pretty silly." is not a hard problem...well, no harder than defining "people".
Why does there necessarily have to be any alignment? It seems to me that AGI, if it happens, is likely to be an extremely powerful and dangerous tool, but that safety considerations, as with other tools and weapons, will have to come from society.
Even if AGI is conscious and agentic, Robin Hanson has argued that "alignment" would do more harm than good, describing it as "enslavement", which is more likely to make an enemy of the AGI than if we didn't pursue alignment. I have no idea if Hanson is correct, but his opinion on the issue should probably carry as much weight as those on the pro-alignment side. If not, why not?
This is a little like claiming that golden retrievers are 'enslaved' because we bred them to be the way they are. Alignment is not some process we're going to carry out in adversarial fashion against an AI that already has another agenda ... in that situation, if it's already advanced enough that there's a moral issue, we're probably dead meat.
And no, given the misunderstanding of the ground on which we are operating reflected in his writing, I don't see any reason to give his opinion much weight.
Golden retrievers are bred, a process that uses the transparently observable features of one generation to choose the parents for the next. The difficulty of AI alignment -- I'm basing this almost entirely on what I've read on this blog and on Less Wrong, but correct me if I've misunderstood -- is that whatever alignment exists inside the black box can be hidden from view. Moreover, the AI might have an incentive to hide its true "thoughts" from view.
Why might an AI have the incentive to hide its thoughts from view? Assuming the AI is conscious there may be many reasons, but one reason -- this is coming from the Hansonian view -- might be because it realizes we are trying to "align" it. From that perspective "align" and "enslave" may take on similar connotations.
Granted, you say, "Alignment is not some process we're going to carry out in adversarial fashion against an AI that already has another agenda". But how do we know when an AI already has another agenda? I realize I'm probably not among the first 10,000 people to ask that question, but my OP, I believe, is relevant to it. If AI development (I believe the word "evolution" is misleading) is gradual in the sense of relatively continuous and AI will eventually develop ulterior motives then it will be almost impossible to say at what point along the continuum those motives begin to develop. GPT-4 could have them.
Burying of the dead has already seen in present-day elephants, so if that's the standard for an intermediary species, then we don't need to look to fossil evidence to confirm they exist. Dolphins and chimps also show signs of mourning their dead, although not the specific ritual of burying them.
>”They're running low on money due to Rose Garden renovations being unexpectedly expensive and grants being unexpectedly thin,”
Am I to believe that a premier rationality organization was unable to come up with a realistic estimate for how far overbudget their Bay Area renovation project would be? It sounds like they took a quoted price at face value because they wanted a nice new office , even though these are very smart people who would tell someone else to add a ridiculous safety margin when making financial decisions off of estimates like these.
(Lightcone Infrastructure CEO here): We started the project with enough funds for something like the 60th percentile renovation outcome. FTX encouraged us to take on the project and was promising to support us in case things ran over. As you can imagine, that did not happen.
We also did not "take a quoted price at face value". I've been managing a lot of the construction on the ground, and we've been working in a lot of detail with a lot of contractors. The key thing that caused cost overruns were a bunch of water entry problems that caused structural damage and mold problems that we didn't successfully identify before the purchase went through. We did try pretty hard, and worked with a lot of pretty competent people on de-risking the project, but we still didn't get the right estimate.
I am not super surprised that we ran over, though it sure really sucks (as I said, we budgeted for a 60th percentile outcome since we were expecting FTX support in case things blow up).
Looking at the photos online, the hotel is gorgeous but yeah - something like that is going to take a *ton* of money. And a little thing called the pandemic probably didn't help either.
I think he said they're a website hoster and hotel manager that happens to specialize in serving the rationality communities. He didn't say they're a "premier rationality organization". (He also didn't say if this is an organization of 2 people or 20 people or what.)
Please suggest ways to improve reading comprehension.
I've always struggled with the various -ese's (academese, bureaucratese, legalese). I particularly struggle with writing that inconsistently labels a given thing (e.g., referring to dog, canine, pooch in successive sentences) or whose referents (pronouns and such) aren't clear. I can tell when I'm swimming in writing like this, and my comprehension seems to fall apart.
As a lawyer, I confront bad writing all the time and it's exhausting! I will appreciate all suggestions. Thank you.
Unfortunately, this is what a lot of people think of as "good" writing, not "bad" writing. Newspapers and fiction want to keep their words fresh, and perhaps convey some minor new information in every sentence. Here's the head of the current top article in the New York Times:
"With Wagner’s Future in Doubt, Ukraine Could Capitalize on Chaos
The group played an outsize role in the campaign to take Bakhmut, Moscow’s one major battlefield victory this year. The loss of the mercenary army could hurt Russia’s ambitions in the Ukraine war."
By using the word "Wagner" in one sentence, and "group" in the next, and "mercenary army" in the next, they try to take advantage of a reader going along with the thought that the same thing is being talked about, to sneak in a little bit more information. I've noticed that celebrity magazines do an even more intense version of this, where they'll use a star's name in the first sentence, and then refer to them by saying "the singer of X" or "the star of Y" in place of their name or a pronoun in later sentences, so that you get little tidbits, and also so they never repeat.
Academic writing, and legal writing, tries to do the opposite. We *don't* want to convey information that *isn't* being intended, so we try to stick with the *same* word or term every single time unless something very significant is being marked by changing to a new one. Most ordinary humans find this "boring" and "dry", but academics and lawyers find it precise and clear.
You're right, which makes Waldo's complaint interesting - they say they struggle with 'legalese' and 'bureaucratese', but that's where the minor sin of Elegant Variation is least likely to be committed.
Unless I've failed my own reading comprehension, anyway.
Having recently struggled to understand a tax form, I don't think that's an accurate characterization of 'bureaucratese'. It is not actually precise, unless you know their traditional interpretation of the terms...which is less closely related to common English than is the physics use of "force" or "energy".
Agree. the -ese's are not precise. They're characterized by turgid language and baroque constructions, mostly from aping old styles that no longer are common (and thus unfamiliar on top of being unclear).
True. One of the more extreme manifestations of this are diplomatic readouts, where various bland formulas are barnacled with years of precedent and significance. Everyone knows about 'full and frank exchange of views' meaning an unholy row, but there are quite a few of these. (A recent discussion on an earlier thread comes to mind, about guarantees vs assurances in the context of the Budapest Memorandum.)
Since you even use the term "Elegant Variation" I bet you know this, but Fowler's Modern English Usage was complaining about this a literal century ago.
I think the thing that helps the most is just practice. You could try exercises like writing out the definition of the words you get stuck on and the common synonyms for them. In academic writing at least I think you just need a certain amount of exposure for it to click. It is annoying because academics are very specialized, so even within a field (or even a subfield) terms can mean different things depending on the context.
Asking gpt to rephrase is useful (in particular, I've found "rephrase in the form of a greentext" surprisingly useful, though there's room to improve that). Also, to the degree that you can, just picking reading material based on readbility is helpful.
How did Eliezer Yudkowsky go from "'Emergence' just means we don't understand it" in Rationality: From AI to Zombies to "More compute means more intelligence"? I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent. It seems like saying "Most people can't tell fool's gold from the real deal, therefore fool's gold == real gold". I know there are probably 800,000 words I can read to get all the arguments, but what's the ELI5 core?
The philosophical question is whether something that is a perfect simulacrum (of an intelligent being, or a conscious one, or one that suffers) has to be accorded that moral status. We don't generally downgrade our estimation of human status just because we understand more of the whole stack of meat biochemistry that makes us do what we do.
So the problem is, or will ultimately be, not 'most people can't tell fool's gold from real gold', but 'absolutely no one can tell this synthetic gold from real gold, but we know it was made by a different process'. Maybe synthetic diamonds would be a better analogy...
This was actually touched on by Stanislaw Lem (writing in 1965) in 'The Cyberiad', in one of the short stories (The Seventh Sally, or How Trurl's Own Perfection Led to No Good). One of the protagonists creates a set of perfectly simulated artificial subjects for a vengeful and sadistic tyrant who has been deposed by his previous subjects...
That sounds like a straw man. Synthetic gold is absolutely real gold. But something that manages to produce human-sounding answers the first 10 times a human communicates with it isn't human, intelligent, sentient, conscious, or anything other than a pattern matcher.
Recently someone on twitter asked EY, whether NN are just a cargo cult. Yes - he agreed, - a cargo cult that successfully managed to take off and land a straw plane on a straw runway.
I think this exchange captures the essence of the issue. I believe, Eliezer still agrees that "'Emergence' just means we don't understand it". The problem is that we managed to find a way to make stuff work without understanding it, anyway. When the core assumption "Without understanding X we can't create X", is wrong - then the fact that we still don't understand X isn't soothing anymore. It's scary as hell.
> I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent.
It's not about what humans believe per se, it's about whether the job is done. A fact about the territory, not the map. If "just a matrix multiplier" can write quality essays, make award winning digital art, win against best human players in chess and go, etc. - then the word "just" is inappropriate. You can define the term "intelligence" in a way that exclude AI, but it won't make AI less capable. Likewise, the destruction of all you value in the lightcone isn't less bad because it's done by "not a true intelligence".
Destruction of all our values can't really happen unless we build something that either a) has a will of its own, or b) has been given direct access to red buttons. The first case might be AGI, the second case is just stupid people trying to save a buck by firing their missile silo personnel. I'm infinitely more worried about the second case, because human short-sightedness is a very well-known problem, and I don't believe we understand sentience, intelligence, consciousness, or any other parts of our minds/brains well enough to model it.
> stupid people trying to save a buck by firing their missile silo personnel.
Yes this is also a dangerous case but a tangental one to our discussion. I don't think being literally infinitely more worried about it is justified.
> I don't believe we understand sentience, intelligence, consciousness, or any other parts of our minds/brains well enough to model it.
Your reasoning is based on two assumptions.
1) We need to understand X to create X.
2) Counsciousness, intelligence and will are the same thing.
1) Is already falsified by the existence of gradient descent and deep learning and results they produce.
2) Seems less and less likely. See my discussion with Martin Blank below.
The fact that we don't understand X but still can make X means that we are in an extremely dangerous position where we can make an agent with huge intelligence and will of his own, without us even knowing it. My original comment is about it and I notice that you failed to engage with the points I made there.
Well we already had this exact problem with just like ENIAC.
Nothing so far has changed. Computers are a way to make thinking machines which are better than humans at some tasks. The number of tasks grows, but so far there doesn't seem to be reason to be concerned they are "conscious".
Which is think it is main thing we are talking about right? Have we created another mental entity? We always assumed "calculators" were going to get better and better and better. And they have.
As already mentioned, the practical problem is somewhat unconnected to the philosophical one. If unaligned AGI can destroy everything, the fact that it's just doing some really excellent Chinese Room emulation of a paperclip maximizer and doesn't really 'want' anything or have consciousness or whatever is ... really irrelevant. I mean, it's relevant to some related issues like how we treat artificial intelligence(s), but beside the point when it comes to the importance of solving the alignment problem.
I would agree that ChatBots aren't AGI, but they're AI with a wider range of applications that we have been ready form. And they can be mixed with other approached to extend their capabilities. If you don't want to call that intelligence, that's fine, but they're cutting the number of entry level positions in a number of fields...and they aren't standing still in their capabilities.
I'm still predicting AGI around 2035, but I'm starting to be pushed towards an earlier date. (OTOH, I expect "unexpected technical difficulties" to delay full AGI, so I'm still holding for 2035.)
> Well we already had this exact problem with just like ENIAC.
No. People who made ENIAC had a gear level model of how it worked . They wouldn't be able to make it without this knowledge. It's not the case with modern deep learning AI paradigm.
> The number of tasks grows, but so far there doesn't seem to be reason to be concerned they are "conscious".
> Which is think it is main thing we are talking about right?
No, we are definetely not talking about "counsciousness". And it's very important to understand why. People do tend to confuse counsciousness with intelligence, freedom of will, agency, identity and a bunch of other stuff which they do not have good model of but which feels vaguely similar. It's an understandable mistake, but still a mistake, nevertheless.
Unless we believe that modern AI are already conscious, it's clear that counsciousness isn't necessary for many of the tasks that people associate with intelligence, such as learning, decision making and language. So it seems more and more likely that counsciousness isn't necessary for intelligence at all. And if humanity is erradicated by uncounscious machines - humanity is still eradicated. We do not get to say: "Well they do not have counsciousness so it doesn't count".
40 years ago computers showed some amount of behaviour we associate with intelligence. Now we know how to make the do more stuff and much better *without understanding ourselves how they do it*. That's the core issue here.
> I agree they are different problems, but a lot of scenarios involving AGI eliminating everyone depend on it having a "mind".
A "mind" in a sense that it can make decisions, predict future, plan and execute complex strategies. It still doesn't have to be counscious.
We used to think that counsciousness is required for such a mind. It seemed very likely because we are such a mind and we are counscious. Then we managed to make uncounscious minds that can poorly do this stuff. So the new hypothesis was that counsciousness is required for some competence level in the task, human level, for instance. Now we have superhuman domain AI and still no need for counsciousness. So as I said, that assumption is becoming less and less likely.
Meh, I still think we haven't really bridged some important gaps here. Not that we won't, but when I worry about AGI I am not really worried about non-sentient paperclip maximizers. It is the sentient ones that would seem to be the actual existential threats.
Not that their aren't problems and questions that arise from our current progress, but they are old style problems, not X-risk ones IMO.
Anyway, while the recent progress has been surprising in light of so many years of little progress, I still don't see it as overall out of form with the broader scale timeline.
This depends on how you think about it. They had a detail level of understanding of how the pieces worked, and even how very large sub-assemblies worked. And other people had an understanding of how those larger modules were interacting. But nobody understood the entire thing.
The problem now is that while we still have that kind of understanding, it split between a drastically increased number of level, and the higher levels don't even know the names of the lower levels, much less who works in them. I've never learned the basics of hardware micro-coding, and I've never known anybody who has, and a lot of people don't even know that level exists
But AI automation will mean goods and services are so cheap, we'll all be living in luxury!
*turn off snark*
Yeah, while I'm sorry for tattooed 25 year old San Franciscans in nice jobs, this is just the extension of what has been going on for decades for blue-collar and less skilled workers. Remember 'learn to code for coalminers'? Now it's coming for the white collar jobs. Since the purpose of business is to make money, why the sudden surprise that companies that moved their manufacturing lock, stock and barrel overseas to save on costs are now looking at *your* job as an unnecessary expense?
We're moving to a service economy, if we haven't already moved in large part. Be prepared to be that dog walker or cleaner even if you went to college.
I agree that there's is definitely some level of double-standardism here. If blue collar workers did what the Writers Guild of America did, there'd be a lot more thinkpieces about how those uneducated proles don't understand economics and are falling for the Luddite fallacy.
That said, I do think "AI automation will mean goods and services are so cheap, we'll all be living in luxury!" is basically the right way to think about it.
Ignoring any legal changes (either the techies take over and make us slaves, or UBI in the opposite direction), if the owners of the AI refuse to share it with anyone then we non-AI owners all keep our own jobs to provide services to each other?
That would give you a diminished subset of the current economy. We can argue about how diminished, but it's not going to be "goods and services so cheap we are all living in luxury".
Does anyone here, preferably someone based in Africa, know the results of Sierra Leone's parliamentary election on Saturday June 24? I need to resolve https://manifold.markets/duck_master/whats-sierra-leones-political-party (since I really like making markets about upcoming votes around the world). I've been *completely unable* to find stuff about the parliamentary election results on the internet, though the simultaneous presidential election has been decently covered in the media as a Julius Maada Bio win.
New "History for Atheists" up! An interview with an archaeologist on "Archaeology in Jesus' Nazareth":
https://www.youtube.com/watch?v=5bO4m-x_wwg&t=3s
LW/ACX Saturday (7/1/23) happiness, hedonism,wireheading and utility.
https://docs.google.com/document/d/1pAZfz5VyFF7Pa4UN0o7FPKAk1vKEHsYTIC2LBJ0FbBg/edit?usp=sharing
Hello Folks!
We are excited to announce the 32nd Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.
Host: Michael Michalchik
Email: michaelmichalchik@gmail.com (For questions or requests)
Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Date: Saturday, July 1st 2023
Time: 2 PM
Conversation Starters (Thanks Vishal):
Not for the Sake of Happiness (Alone) — LessWrong https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone (audio on page)
Are wireheads happy? - LessWrong https://www.lesswrong.com/posts/HmfxSWnqnK265GEFM/are-wireheads-happy
How Likely is Wireheading? https://reducing-suffering.org/how-likely-is-wireheading/
Wireheading Done Right https://qri.org/blog/wireheading-done-right
E) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.
F) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.
G) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.
Any suggestions on online therapy which is good?
I've finally begun to earn enough that self-funded therapy is an available option. I'd like someone who's more towards the CBT/taking-useful-actions end, rather than psychotherapy (I would not mind talking through how I feel about things, it just seems insufficient without talking through potential actions I can take to improve my life.)
Mainly, along with attempting to figure out what to do, what I want is essentially to be able to talk through things with a smart person who's obligated to keep my privacy (I'm generally very bad with trust as far as talking with the people around me is concerned; I'm hoping that a smart stranger, who I can trust to keep things to themselves, would allow me to be more open.)
Also taking recommendations for books/other things which I can use myself. (I tried reading Feeling Great, and maybe I should slog through it-I'm just generally put off by mystical stories which omit most details in the service of making a point-they just seem kind of fake. Maybe I should just get used to that, though.)
Re "smart person." Be careful not to use the desire for a smart therapist as an excuse to avoid therapy ("I'd love to do therapy, but the therapists are all inadequate!"). It seems possible, if not likely, that I'm generally more intelligent than most of the therapists I've had. But for the most part they've been good at what they do. Even if a therapist doesn't follow my brilliant critique of my supervisor's emailing habits, a good therapist will see what's needed to help you understand why you're annoyed by your supervisor etc.
I guess another way of putting it is: The best therapist isn't necessarily the best conversation partner.
All that said, I do like the friend analogy. I think of therapy as friend-prostitution. I receive friend services (listen and help me understand myself and enrich my life), but instead of reciprocating I pay.
I think that's fair. As far as therapy is concerned, by a 'smart person' I just mean someone who's willing to adapt on the fly based on whatever I'm talking about, rather than having an individual course set in mind which they want to guide me towards. Not so much because I find my problems to be profoundly unique, more so because it gives me a degree of comfort knowing that my therapist is competent enough to help me through bespoke situations if they do come up-which, given the presence of disability and other things etc, they will at least some of the time.
I also like the friend analogy and, honestly, if someone came up with that exclusive thing (without even any therapy involved), I'd totally go for it. Most of my friendships seem shallow enough that navigating the "are they good enough a friend for me to be able to trust them and dump my problems on their lap" minefield is headache inducing enough that I just don't, and if I could pay someone to listen to me, a lot of that accounting goes away.
I’ve noticed that Scott often quantifies the social cost of CO2 emissions by the cost it takes to offset those emissions (e.g., in his post on having kids, he says since it costs ~$30,000 to offset the average CO2 emissions of an American kid through their lifetime, if they had $30,000 in value to the world, that’s enough to outweigh their emissions; he does something similar in his post on beef vs. chicken). But this seems wrong to me: the cost of carbon isn’t the cost of offsetting that level of CO2 emissions, especially in a context where carbon offsets produce positive externalities that the market doesn’t internalize (so we are spending inefficiently little on carbon offsets right now). Am I missing something?
I get why this works if carbon offsets were in fact priced at their marginal social value (as the social value of a carbon offset presumably equals the social cost of carbon). But I’m not sure this is true? How are carbon offsets actually priced?
I think it's pretty normal to measure the cost of damages by the amount it costs to fix them. If it costs X dollars to remove Y tons of carbon, and Y tons of carbon causes Z utils of harm to the world, then each expenditure of X dollars on the carbon program "buys" Z utils by removing that harm.
(Well, this is true for carbon offsets where the program is just pulling Y tons of carbon out of the atmosphere and burying them in a mineshaft or something. If the offsets come from something like "we were going to emit Y tons of carbon, but we didn't because we switched to renewable energy," it's not quite as clear, but the logic is similar.)
I suppose you could try to measure the total damage from global warming (e.g., if global warming goes unchecked and we need to build a seawall around Miami, then it's going to cost us a lot more than simply the cost of getting CO2 levels back to normal), but it would be very difficult to calculate the impact a marginal ton of CO2 has on Miami's property values.
I think if you spend money on a carbon offset, you're spending money specifically on "not emitting CO2" rather than "repairing all climate-caused damage in the world," so the cost of the offset is still the appropriate comparison for "how to have kids and not emit excess CO2."
I think this is fine in the context where it’s like, “if you have kids and spend this much money on CO2 offsets, you’re good.” So as a recommendation for action to spend money on something once you have a kid, this seems reasonable.
But I’m still very confused by the line of reasoning which goes: “If your kid adds $30,000 in value to the world, since that’s the cost of carbon offsets, then having kids is worth the cost” (which is in the post, this is just paraphrased). Because that value may not be spent on carbon offsets, and $30,000 cost of carbon offsets ≠ $30,000 social cost from having a kid.
I mean, if you assume, as Scott does, that utility can be meaningfully quantified with money, then using [literally market value of something] naturally follows.
Which is to say, the issue is, fundamentally, not carbon offsets, but an entire economic paradigm. Even if you get the paradigm's users to agree with any of your specific arguments about market not pricing something correctly in a particular instance, they'll nod, call it a "market/regulatory failure, happens", then go back to using monetary value in their reasoning.
Which is an entirely correct, rational and natural thing to do. (And I say this as someone who disagrees with the paradigm, and I think your intuition to question it is also entirely correct. I guess if there's one thing you're missing, it's that you're questioning assumptions which lie on a much higher level of (e.g. Scott's) belief structure than you've imagined.)
I’m really confused by this. I’m fine with quantifying it in dollar terms, and my disagreement is rather *what* dollar value to use (I think the cost of an offset is not the social cost of CO2).
What else would affect the social cost of CO2?
Like, it's true that if you steal $30,000, it's not enough to pay $30,000 back later. Typically a court will allow triple damages. So would you say the social cost of CO2 is $90,000?
Are you thinking of something like CO2 poisoning, sort of like a faulty coffee lid where injury could have been prevented by a $2 lid but now you're on the hook for millions in medical?
You're saying that carbon offsets have positive effects the market doesn't consider, so would that push the cost of children lower than the cost of offsetting their carbon output?
In a case where there are a lot of incalculable qualities but a price still needs to be set, it's fine to set it by the known qualities and adjust later as problems arise.
I agree with the general point about the social cost of carbon not being equivalent to the cost to offset the carbon. Parenthetically, note that David Friedman has a number of posts discussing the social cost of carbon where he argues that the it is unclear whether it is positive or negative, but that it is clear that the common estimates are too high. E.g. https://daviddfriedman.substack.com/p/my-first-post-done-again.
Just realized why you wouldn't want to live for long periods of time and definitely not forever: Value Drift. Your future self would not share the values your current self holds. On a short timeline like a regular human lifetime, this won't matter too much, but over centuries or millennia it starts to look different. Evolution probably has acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and it doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans.
Edit: People need to realize not all value drift will be benign. Some types of value drift will lead to immense evil. I don't-even-want-to-type-it-out type of evil.
https://sharpcriminalattorney.com/criminal-defense-guides/death-penalty-crimes/
Interestingly, this is almost the opposite of a commonly heard argument against longevity - the idea that having a large number of long lived people would ossify values and progress (like the old quote "science advances one funeral at a time").
I really don't see the problem. We forget a ton of stuff, includign our old beliefs. Hell, even old beliefs I find today utterly moronic, I look upon with a kind of benevolant tolerance.
In fact, greater longevity may cause people to be more understanding of other's ideology, because they'd be more likely to have held it before at some point.
Joke's on you, I have no values.
A little more seriously; the old people I know are largely the same people they were when they were young. The "value drift" comes from a combination of greater experience and physical deterioration; older people have seen what it means for their plans to be fulfilled, and are more concerned with health because they know what it means to not have it. This argument is essentially an equal argument against education.
"Evolution probably has acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and it doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans."
...
"This argument is essentially an equal argument against education."
No. By "artificially extended" I mean anti-aging techniques which directly target the aging process itself, slowing cellular aging and the accumulation of damage over time. Past gains came from reducing environmental causes of death (infection, accidents, etc.). With gradual increases in lifespans, value systems tend to evolve slowly along with the culture and environment. People's core values often remain relatively stable over the timescales we have experienced so far. However, with much longer lives - on the order of 100-200 years or more - individuals may undergo more profound changes in values, priorities, and life goals over time.
> Your future self would not share the values your current self holds.
That's not a reason to kill him. My kids or my neighbors also don't share my values exactly.
At any point in time, your values at one year in the future won't be too different from your current values, so you'd always want to live for at least one more year. From this it follows that you'll never want to die, at least not because you fear that future you is too different from present you.
How much do you think your life would change if you suddenly were gifted (post tax) $5m?
Epistemic status: NW "close to but under a million", TC $300k, Bay Area
In three words: VTSAX after charity.
So first off, $5M for me would mean that any expense under $500 in a day would round to roughly zero (I'm not a compulsive shopper, so I can generally trust that I will only do "special" expenses a couple of times a week at most, and most of these will be significantly under whatever the threshold is). That's tantalizingly close to the point where UMC-grade domestic travel or most tech gadgets become a rounding error where only the time and effort involved matter instead of the money being even a consideration.
As for surface level changes, I'd still go to work in the office but would be more open to job hopping (e.g. to a quant) since I wouldn't be as dependent on an income I already know to be stable. I'd probably move into a 2B instead of a 1B so I could separate my bed and computer, and if I was staying in a place where I still need a car (that is, not Manhattan) then I'd test drive a top Model S and let my immediate reaction decide whether to buy it, but other than that things would be fairly similar, except for upgrades here and there (eg staying in a suite if the standard room at the hotel I like is too small) and having a much lower threshold of desire needed to buy something to begin with.
Too hard to predict, but I would not be optimistic. People who receive windfalls of that size seem as a rule to fare much worse than people who are born into it, who in turn fare worse than people who acquire it through business or other ventures.
Already retired with a comfortable income so maybe a second home in New Zealand? Not sure if 5 million would be enough for that but it’s a fun idea. For sure I’d take the clunky Otterbox case off my cellphone.
I would retire immediately, otherwise keep my current lifestyle (at least in short term), and invest the remaining money.
In the free time I would start working on my projects, and I would meet my friends more often. Probably would live a bit healthier, having more time for exercise and cooking.
I might keep devoting some time to my job part time, but mostly I'd just pursue my own interests without worrying about money.
Short-term, not too much. Long term, probably a lot.
Like, I think I'd do the "invest, retire, 4% annual withdraw=$200k" but...
I suspect a lot would change if I became completely location independent but I think the biggest thing would be that combined with experimenting with money.
Like, a lot of us, even if we make good money, aren't in a position to spend $200k. Even if you make $200k, you're not spending $200k but there are spending options @ $200k, especially outside of NY and SF, that are...really interesting.
Like, I don't think a personal trainer at the gym is necessary but...I'd probably be in better shape and I'd definitely be doing more stretching and be safer. I don't "need" a nutritionist but...I'm really curious if one would make a difference.
I hate wearing suits and ties but I have noticed, as you spend more money, they get a lot comfier and...people do always treat people in suits better.
I guess the thing is that, using health as an example, my workout routine and diet are probably, like, at a 7/10 but if I had a $5000/year budget for personal trainers and nutritionists and I started buying everything from Whole Foods I'd get to an 8/10 or 9/10.
But it's also not just money and having spending, I could technically do some of this stuff now, but there's a certain cost in time and money to experimenting with things and finding out what's worth the money and what isn't, especially for me personally. Like, as I've gotten more money, I've found I prefer to pay a premium to live in places where I don't need a car, rather than buy a nicer car. Maybe that's different for you, all good, but...I think I'd spend a decent chunk of time trying to find ways to convert money into happiness; it seems like there's a certain amount of knowledge and experience to that which I don't have.
Five's a nightmare.
Seriously though, with an extra five million I'd have two choices -- either upgrade my lifestyle moderately and retire now, or upgrade my lifestyle significantly and keep working. Since I don't currently have any particular plans for what I'd like to do in early retirement, I'd probably keep working until I thought of a better plan.
What kind of lifestyle upgrades? Fancy cars, major house renovations, more expensive holidays and general better quality everything-I-own. I like my house and I probably wouldn't bother getting a different one, but I'd spend half a million renovating all the things I don't like about it.
There's a good chance I'd retire early from my current job and pursue some private projects instead, but I'm not sure about. I'd definitely be moving someplace nicer, and pursuing some major lifestyle enhancements.
I'd buy the house I liked, hire household help, help in-laws with certain healthcare expenses, fund various tax advantage accounts to the maximum, and take a trip somewhere nice.
Depends how many people know about it. I'm terminally unambitious, and I enjoy my job; I would shove the money in the bank, stay in my cheap-ish apartment and keep working. But I might be hassled for money by neighbors if they knew.
A market on Manifold has been arguing about John Leslie's Shooting Room paradox. The market can't resolve until a consensus is reached or an independent mathematician weighs in. Does anyone here have any advice? https://manifold.markets/dreev/is-the-probability-of-dying-in-the
Hi, independent mathematician here, although I don't seem to be able to post there.
This isn't a well-posed question. It falls into a problem that a lot of attempts to formulate probabilistic paradoxes do, which is presupposing the ability to sample uniformly from the integers. But that doesn't work - there simply isn't a probability distribution with that property.
If the probability of the snake biting was more than 1/2, we could still say something meaningful by removing “you” from the picture, and computing the expected number of people who get rich and the expected number of people who get bitten.
But in round n we expect 2^n (35/36)^n people to get rich and 2^n(35/36)^(n-1) 1/36 to get bitten. And both those sums (in fact, both those terms) tend to infinity.
We /can/ say that the expected number of people who get rich in round n is 35 times the expected number of people who get bitten in round n, just as we'd expect.
But “if you are one of those infinitely many people, chosen uniformly at random...” simply isn't a meaningful start to a question.
"Importantly, in the finite version it's possible for no one to die. But the probability of that approaches zero as the size of the pool approaches infinity."
The probability of being chosen goes to zero here as well
Point 5 in the FAQ says "Importantly, in the finite version it's possible for no one to die. But the probability of that approaches zero as the size of the pool approaches infinity."
But this is irrelevant. No matter how big the finite pool of people is, the probability that nobody dies *conditional on you being chosen to play* does not approach zero. (This is because if you are chosen to play it is probably because the game is only a few rounds short of exhausting all potential players and ending without death. To understand the difference, it may help to imagine a city where 99% of the buses have 1 passenger and the rest are full with 100 passengers. The probability of a bus being full is 0.01, but the probability of a bus being full *conditional on you being on board* is about 0.5.)
Your probability of dying, given that you get to play, is only 1/36, no matter how large the finite pool is.
(And in the case of an infinite pool of players, the question doesn't make sense, as the premise "choosing each group happens uniformly randomly" is impossible.)
Not a mathematician, but that's mostly coming down to word choice. What does it mean to be "chosen" to play? Are you chosen when you show up, or when you roll the dice?
If you're chosen by showing up and your grouping is random, then it's going to be some weird thing where your odds of dying are *members of group*/36, minus the odds of a previous group rolling snakeeyes. So, like, previous round times 2, times 35/36 each round. Or something. And does that already cover groups past yours? I'm not a mathematician.
If you're instead chosen when your group has the roll, it's 1/36, full stop.
...do I have that backward? I think the group size actually counts in the player's favor; before the first roll, your odds of dying decrease in each successive group, and with double participants in successive groups your chance of getting selected for the later groups is much higher, so your odds of dying before the first roll should be significantly lower than 1/36.
Canada's wildfires have broken the annual record [since good national records exist which seems to be from 1980] for total area burned, and they're just now reaching the _halfway_ mark of the normal wildfire season.
https://www.reuters.com/sustainability/canadian-wildfire-emissions-reach-record-high-2023-2023-06-27/
Meanwhile the weather patterns shifted overnight and Chicago is now having the sort of haze and smell that the Northeast was getting a couple weeks ago:
https://chicago.suntimes.com/2023/6/27/23775335/chicago-air-quality-canadian-wildfires-worlds-worst
Did my usual 1.5 mile walk to the office this morning, from the South Loop into the Loop. The haze is the worst I can remember here since I was a kid which was before the U.S. Clean Air Act, and the smell is that particular one that a forest fire makes. (Hadn't yet seen the day's news and was walking along wondering which big old wood-frame warehouse or something was on fire.)
Southern Michigan here. The haze today felt oppressive and somehow demonic. Outdoors smells like a house burning down. Spent about two hours outdoors this evening for a good reason, but now I have a sore throat. I hate it.
Yea the sore throat has been constant for me since Tuesday, and is irritating both literally and mentally.
My adult son, who lives now in a different part of Chicago, woke up Tuesday morning with a splitting headache and thought it was an allergies thing until he saw the morning news reports. The June weather here has been quite pleasant and we've all been happily (until this week) sleeping with lots of windows open.
Of course folks with specific conditions like asthma and/or who are elderly have it a lot worse. My eldest sibling is 69 now and has had various respiratory issues for years, lives on the city's South Side, and he's just had to be a recluse this week.
It's been breezy here and the winds are supposed to swing around to be from the south tonight/tomorrow which should push a lot of the haze away (hello Wisconsin please enjoy this gift from us). But then over the weekend the predictions are for a shift back to winds being out of the north. And Canada seems to be having having little progress in getting fires under control. So, rinse and repeat I guess.
Yeah, the smell is wood smoke.
There's enough here that we're supposed to stay inside, and I got a bit of a sore throat, but the smell is pleasant.
My sense overall is that the book review contest entries are better this year than last year- do people generally agree?
If the reviews are mostly coming from the same crowd, we should probably expect them to improve every year as the crowd gets more experience.
I've counted two as quite good and the others as okay. I don't remember last year, may not have been here for it.
no
Maybe slightly, I thought last years were pretty decent except for a few "meh" ones, maybe we just haven't gotten to the "meh" ones yet?
Pretty good so far.
Was it anti-Russian or just anti-war-of-conquest? Wasn't the writer themself Russian?
I have been trying to track down a specific detail for a while with no luck. The first Polish language encyclopaedia, Nowe Ateny, has this comment on dragons that is among its quotable lines (including on the Wikipedia page!): "Defeating the dragon is hard, but you have to try." This is very charming and I can see why it's a popular quote, and I'm interested in finding the original quote within the text, but searching the Polish word for dragon (smok, assuming it wasn't different in the 18th century) hasn't revealed anything. Would anyone be able to find the sentence and page that it appeared on?
I tried for a while to use ChatGPT for this, thinking that it's the sort of "advanced search engine" task it would be good at, but the results I got were abysmal.
Thank you to Deiseach, Faza, and Hoopdawg! I had a feeling that it wasn't so clear cut, and this is exactly the sort of detailed breakdown that I was hoping someone could do for me. I appreciate you taking the time to use your research skills like this.
You're welcome, this is the kind of fun, nothing of huge importance riding on it and more interesting than the work I should be doing right now stuff I enjoy 😁
TL;DR - the quote is probably spurious. See my discussion with Deiseach: https://astralcodexten.substack.com/p/open-thread-282/comment/17801803
Do you have the book? Do you have other evidence that the quote is not made up?
The quote was added to Wikipedia here https://pl.wikipedia.org/w/index.php?title=Benedykt_Chmielowski&diff=prev&oldid=945987 perhaps you could ask the editor.
Looking at that, there is a Latin superscription on the drawing, and I think that the 'translation' is probably a joke by someone:
Latin is "draco helveticus bipes et alatus" which translates to "bipedal and winged Swiss dragon"
I think "the dragon is hard to beat but you have to try" is someone making a joke translation of the Latin text. (EDIT EDIT: I was wrong, see below)
EDIT: On the other hand, this guy is giving "quirky quotes" from the book and he translates it that way, but with a different illustration to the one in the Wikipedia article:
https://culture.pl/en/article/10-quirky-quotes-from-polands-first-encyclopaedia
EDIT EDIT: And we have a winner! Copy of the text here, with illustrations, and from the section of illustrations, the one titled "How to beat a dragon" has that very text and translation!
SMOKA POKONAĆ TRUDNO,
ALE STARAĆ SIĘ TRZEBA
THE DRAGON IS HARD TO BEAT,
BUT YOU NEED TO TRY
https://literat.ug.edu.pl/ateny/0050.htm
Damn, beat me to it.
It does appear, however, that the quote is - in fact - spurious. It doesn't appear in the scan of the 1745 edition (the section on dragons begins on p. 498, here: https://polona.pl/item-view/0d22aab6-4230-4061-a43e-7d71893ad2bc?page=257), nor - for that matter - in the transcribed text of the encyclopedia on the page you linked (the dragon falls, quite sensibly, under reptiles).
The illustrations aren't part of Chmielowski's encyclopedia - as can be readily checked by looking at the scan - but rather come from Athanasius Kircher's "Mundus Subterraneus" - https://en.wikipedia.org/wiki/Mundus_Subterraneus_(book).
Lord only knows who came up with the accompanying text for that particular illustration, but I suspect the editor of the linked online edition.
The plot thickens! So the illustrations *aren't* part of the work, and somebody was being naughty?
It does look like "someone said it on the Internet and that got repeated as fact" once more.
Though I suppose we can plume ourselves on being (for the moment) better fact-checkers than AI 😁
Actually, it's even more complicated.
I've noticed that the online transcription that you linked differs significantly from the scanned 1745 edition - to the point that it contains entire paragraphs that cannot be found in the 1745 printing.
Notes to the online text (https://literat.ug.edu.pl/ateny/0100.htm) state that it is based on a 1968 selection and edition by M. and J. Lipscy. Therefore it is possible that the quote was introduced in this prior edition, together with the illustrations. Chmielowski certainly uses Kircher as one of his sources when writing on dragons, so it's not entirely baseless, but that still doesn't answer the question of where the quoted sentence came from.
Unfortunately, I'm not likely to be able to lay my hands on the Lipscy edition, so it will probably remain a mystery.
ETA:
All told, my trust in the online transcription is pretty low, given that it describes itself as: "erected on the internet for the memory of the wise, the education of idiots, the practice of politicians, and entertainment of melancholics". I most certainly get the joke, but the fact it *is* a joke makes me suspect that the entire enterprise isn't too serious about itself, academic setting notwithstanding.
You've beaten me to... basically everything, so, spared from being a downer, all I have left to point out is that Nowe Ateny had two editions (1745 and 1754), of which only the first appears to be available online. So while the 1968 text cannot be considered a valid source or proof, it's still possible that the quote did, in fact, appear in the original.
Psychedelics affect neuroplasticity and sociability in mice... Maybe I should dose my cat (The Warrior Princess) with MDMA to make her more sociable with the neighborhood cats. She does love to brawl!
https://www.nature.com/articles/d41586-023-01920-2
https://www.nature.com/articles/s41586-023-06204-3
My fur buddies - Moose and Squirrel - just graduated to adult cat food on their first birthday. They _really_ love to wrestle with each other. When they are upstairs and I’m downstairs it sounds like a couple of full sized humans going at it.
I’ve come up with a couple of distractions to keep them apart though. My profile photo shows Moose enjoying one of his favorite videos.
Saw a great cartoon one time of a fat disgruntled cat whose owner had presented him with some cutesy cat toy. Cat's thinking "Look at this lame toy! I just want to be out fucking and fighting with my friends."
Dunno if this would interest Warrior Princess, but one of my young Devon Rexes really *loves* puzzles. Got some here: https://myintelligentpets.com/products/mice. He likes the 3x3 sudoku and is rapidly getting expert at it -- whips through it very efficiently these days. Am pretty sure Mice and Pets O'Clock will work well for him too. Some of the others have design problems. Also make him little puzzles using a kid's plexiglass marble run set -- he has to tip the tube to make the treat fall out. The other cat hates treats so I can't use these puzzles on him, but he likes toy challenges, where I build a little thing with toys stuffed inside or under it and he has to work to get at them.
> He likes the 3x3 sudoku and is rapidly getting expert at it
Note for others: it does not mean what you probably think.
https://myintelligentpets.com/search?q=sudoku
Well, he's working up to regular sudoku. Heh.
I hope shameless self-promotion isn't forbidden here, but I thought some in this community in particular might enjoy my near-future sf story "Excerpts from after-action interviews regarding the incident at Penn Station," published last week in Nature. (947 words)
https://www.nature.com/articles/d41586-023-01991-1
Leading the Harvard professor accused of fabricating data based on Independent data analysis, in research papers covering the topic of honesty.
https://www.google.com/amp/s/www.psychologytoday.com/intl/blog/how-do-you-know/202306/dishonest-research-on-honesty%3famp
https://www.businessinsider.com/real-estate-agents-lawsuits-buy-sell-homes-forever-housing-market-2023-6
Shared for "Nicholas Economides, a professor of economics at New York University..."
I've been writing a novel on AI and sharing weekly. The (tentative) blurb is "Why would a perfectly good, all-knowing, and all-powerful God allow evil and suffering to exist in the world? Why indeed?" I just posted Chapter 5 (0 indexed), hope it's of interest!
https://open.substack.com/pub/whogetswhatgetswhy/p/heaven-20-chapter-4-gnashing-of-teeth?r=1z8jyn&utm_campaign=post&utm_medium=web
Ha. The program constantly glitching out and killing people brings up that it's glitching out and killing people, and the programmer doesn't consider that it might be the same glitch.
I'm not a huge fan of these kinds of little event skips, but otherwise these have all been fun.
Thanks! The idea was that the “EmilyBot” that brought up the killings was trained on the real Emily’s communications, implying that the real Emily knows about them, and likely will report on them. Is that what you’re referring to or did I misunderstand you?
That's the one. Just like the Marilyn sim was based on available Marilyn data, and then glitched out and killed the user repeatedly. Found it funny that even after the 'main' program tells him the line is based on nothing, he still doesn't consider it might just be glitchy. Hook, line and sinker, that guy.
Thanks! I have a lot of the plot beats and the ending planned out; the in between bits are a bit more freestyle.
Nothing about the "official" and public story about the Day of Wagner makes sense.
That story, roughly: after weeks of verbal escalations, Prigozhin declares open revolt around 24 JUN 0100 (all times Moscow time). At 5AM, the troops enter into Rostov-on-Don, and "take control" of the city (or one military facility) without resistance.
The troops then start a 650 mile journey from Rostov-on-Don to Moscow. The goal? Presumably, a decapitation strike against Putin. Except, rumor has it that Putin wisely flew to "an undisclosed location".
The Russian military set up blockades on the highway at the Oka river (about 70 miles south of downtown Moscow), and basically dared Prigozhin to do his worst.
In response, Prigozhin ... surrendered completely before midnight, accepting exile in Belarus. The various Wagner troops are presumably going to follow the pre-announced plan of being rolled into the regular Russian army on July 1.
... while I can't rule out that there was an actual back-room coup attempt, it seems more likely that this was a routine military convoy that was dramatized in the Russian media, and then re-dramatized by the Western media as something that was not choreographed ahead of time.
Perhaps Prigozhin just realized that the real coup was the friends we made along the way.
I think it makes sense if Prigozhin saw himself going the way of Röhm, absent something drastic.
The situation has clearly been unstable for a while now, and the side that moves first gets the advantage.
So he launches a mutiny, not a coup. A show of force in order to improve his position within the system, rather than an attempt to take over. He knows Wagner can't sucessfully march on the Kremlin so he quickly negotiates an off-ramp via Lukasheno.
He and his allies are alive and free, Wagner no longer exists (not that it officially did in 2021) and a minimal amount of blood has been spilled. It could have gone a lot worse for everybody involved.
I have a much longer reply to the comments here, now published at https://www.newslettr.com/p/the-convoy-theory .
Can you go into more detail on why you think Putin/Prigozhin et al. would have staged a fake insurrection? I agree that the details of the situation seem strange and confusing, and I wouldn't necessarily trust the official story coming from any individual actor, but it makes less sense to me that Putin would have deliberately staged a fake insurrection.
Putin's swing from promising to punish the traitors to giving Prigozhin a cushy retirement makes him look weak as far as I can tell, and will embolden the Ukraine and their NATO supporters while disheartening Russian soldiers and civilians. I appreciate that you may not see things that way, but what exactly does he gain from this that would be worth it?
I can't, because I don't *know* why. But this would not be the first time Putin has resorted to needless theatrics.
My top three guesses are "because Putin wants a surprise attack by Wagner against northern Ukraine to be a surprise", "because he wants to humiliate the West", and "because he actually has brain damage". But all of those are speculation I would prefer not to publish in that context.
Thus is a serious question: how is your Russian comprehension/fluency? You seem not to site any Russian sources, and your phrasing “Russian press” or “Russian media” is very… American/Western. Can you name three Russian press outlets that you think fit your description and are useful in the context of this discussion?
I know that Я is the last letter of the Russian alphabet, but I'm not confident about the order of the other letters.
I am explicitly not going into detail on certain points where the Russian press is more accurate than the Western press. For example, the idea that Wagner occupied all of Rostov-on-Don isn't anywhere in the Russian press, but a lot of Westerners have run with it.
But for the statements of Putin/Prigozhin/etc. the translations into English are fine.
Ok then. All hot air.
I appreciate that. I guess that I, and it looks like most others here, aren't going to place much credence in a theory that involves a large conspiracy without a clear unifying motive.
I concede that if this is a smokescreen to cover repositioning to enable a surprise attack in northern Ukraine, I will be suitably impressed. I guess we will find out soon enough, though that does seem like the kind of 3D chess move that rarely plays out in real war. Definitely not impossible, though.
Russian here.
You are dramatically overestimating the competence of my government to organise such a prank on purpose. Frankly, I'm surprised that after all the clusterfuck of the Ukrainian war people keep mistaking in this direction.
The null hypothesis is that it looked like a mess because it was a mess. There were different plans by multiple parties and they didn't go as expected so everyone just defaulted to a completely unsatisfying compromise.
" I'm surprised that after all the clusterfuck of the Ukrainian war people keep mistaking in this direction."
May I humbly offer an explanation? Here: "The culture one knows nothing about has all the answers". Especially when it has a weird alphabet with too many characters. See also, e.g., "tiger mother".
On a more serious note, it is impossible to understand a culture without being proficient in its language. I mean, citing CNN (!) as any sort of an authority on Russian affairs...
There is much we don't know about Prigozhin's coup attempt, like what was the actual goal and why he blinked. But the evidence that this was a coup attempt (broadly speaking) and not a bit of spin on an ordinary troop rotation, is overwhelming. To believe that, means believing that both Putin and Prigozhin publicly acted out a role that makes both of them look weaker than if nothing had happened. Means believing that the Russian government, the Ukrainian government, the US government, probably several Western European governments, the western OSINT community, and the Russian milblogger community, all publicly endorsed a lie that each of them knew that any of the others could disprove. For what, to entertain the rubes on TV for a day? Distract them from Hunter Biden?
There's no plausible motive to unite everyone who would have to have been united for that version to have played out. It's a straight-up conspiracy theory, of the sort that the general argument is properly against. And not understanding a thing, is not evidence that the thing is a hoax.
So what were the routine convoy routinely doing hundreds of miles from where they were supposed to be fighting?
They were relocating to Belarus.
The convoy was announced to have stopped near Yelets <https://www.nytimes.com/2023/06/24/world/europe/wagner-moscow-russia-map.html> , which is where Google Maps says to exit the M4 and head west when driving from Rostov-on-Don to Minsk.
Nothing about this comment makes sense. Can't fully rule out that you might actually believe what you're saying, but it seems more likely that you are Scott Alexander, posting controversial content under an alias to increase engagement.
Three fourths of the original post is the author noticing he is confused by the publicly known facts. You and Melvin poking fun at them is doing a disservice to rationality project.
For the record, I don't think the "dramatized routine military convoy" theory holds any truth.
Light disagree. While noticing your confusion is indeed a virtue, it's also important to notice when your alternative theory is producing even more confusion. Maybe poking fun isn't the best strategy here but it's not entirely unappropriate.
The core point is that approaching everything with such level of motivated scepticism is unsustainable and self defeating.
It seems more parsimonious to assume that Russia doesn't exist at all.
This entirely ignores Prigozhin's public statements and the charges opened against him? It also misunderstands how the Russian media works.
Now, admittedly, this is a very confusing situation, but that's because...well, we don't know what was agreed to, or if anyone is actually planning to live up to those agreements, or what threats/promises were actually made.
I'm all for acknowledging the fog of war and the propaganda machine and not rushing to judgment, and the appropriate way to deal with those things is to apply a reasonable discount to the value of evidence you obtain based on source and circumstance.
But one should also be cautious not to *over* discount evidence. Otherwise you can end up surrounded by imperfect-but-still-useful information, irrationally discount it all to zero because "it's all just a product of the spin machine," and just sit irrationally ignorant in a sea of useful information.
And I think trying to explain Wagner's activities on July 24 as a "routine military convoy that was dramatized in the Russian media" is very much applying too much discount to too much evidence pointing to the simple explanation that what both Russian and Western media has portrayed as an act of armed insubordination was just that.
Just for starters, I don't think Vladimir Putin would have personally referred to a "routine military convoy that was dramatized in the Russian media" as treason.
https://www.theguardian.com/world/video/2023/jun/24/russia-putin-accuses-wagner-boss-of-treason-in-national-address-video
Nor would the government of Belarus have officially announced updates on its work negotiating terms between the Kremlin and a routine convoy.
https://president.gov.by/ru/events/soobshchenie-press-sluzhby-prezidenta-respubliki-belarus
The dead pilots JW mentioned are documented in plenty of places. Just a few examples: (1) https://youtu.be/u8tyn9Xr-68?t=399, (2) https://www.businessinsider.com/wagner-boss-yevgeny-prigozhin-breaks-his-silence-after-aborted-mutiny-2023-6 (also quotes Prigozhin himself as expressing "regret" for having shot down Russian military aircraft and includes a link to his audio message if you have telegram and speak Russian).
A skeptic can point to any one of these kinds of information nuggets and rightly say that they aren't perfect, but they pile up and at some point it's more foolish to discount them all than it is to believe them.
CNN is already running opinion pieces that answer your point about Belarussian involvement: "Belarus leader Lukashenko’s purported mediation in Kremlin crisis stretches credibility to the limit" - https://www.cnn.com/2023/06/25/europe/putin-belarus-lukashenko-analysis-intl/index.html
Aside from the simple fact that it is merely an opinion piece, and not suited to disproving matters of fact, the opinion piece you reference merely observes that the circumstances are strange. It doesn't even dispute that the intercession itself happened - it simply observes that "Lukashenko’s apparent intercession raises more questions than it answers" because "Lukashenko is clearly seen as the junior partner in the relationship with Putin," "[d]elegating Lukashenko to resolve the crisis further damages Putin’s image as a decisive man of action," etc.
But one can't simply take a single observation that a thing is unusual and treat it as evidence that "answers [someone else's] point" that the thing more likely happened than not in light of the large amount of diverse reporting indicating that it did.
You can play evidence whackamole here finding reasons that my evidence is imperfect, JW's evidence is imperfect, beleester's evidence is imperfect, etc.
But even owning that the evidence is imperfect, you're not addressing the *volume* of it all pointing in the same direction. Which is irrational to do, and leading you to talk yourself into discounting a very likely explanation for something in favor of an extraordinarily unlikely one. That path leads, much more often than not, to being wrong, and I'm very confident that you are in this case.
[follow on note - even your opinion piece itself describes Wagner's activities as follows: "A quick recap: A major crisis shook the foundations of the Russian state Saturday, as forces loyal to Wagner mercenary boss Yevgeny Prigozhin marched toward Moscow. Then, an abrupt reversal happened — Prigozhin called off their advance, claiming his mercenaries had come within 124 miles of the capital but were turning around to avoid spilling Russian blood." When even your own evidence is describing it as a major crisis "march on Moscow," I really don't see anything but motivated reasoning to support the proposition that this was all somehow a big misunderstanding about "a routine military convoy that was dramatized in the Russian media"]
That strikes me as less "we don't believe it happened at all" and more "we don't believe it happened the way they say it happened."
Like, it seems reasonable to question the idea that Putin and Prigozhin just hugged it out and went back to work immediately after hurling accusations of treason at each other. But it seems even more doubtful that Putin, Lukashenko, and Prigozhin all got together and agreed to say that Lukashenko averted a near-mutiny for no apparent reason.
Having Wagner stationed In Belarus could be quite useful to Lukashenko.
Why would the Russian media make Putin look weak and vulnerable by inventing a coup when none existed? Very likely, Prigozhin expected the generals of the official army to join him after he declared his rebellion. When that didn't happen, he knew he was done for. Exile was the best he could hope for, and that's essentially what he got.
Making the Western media look stupid is a sufficient reward for Putin.
But ... the press has been discussing for weeks the ongoing power struggle between Prigozhin (who is [or was] formally independent from the Russian military hierarchy) and the Russian military hierarchy. That had to be resolved somehow. It seems the resolution is sending Prigozhin to Belarus.
Everything else is still theory.
For a dictator, looking weak is life-threatening. Making Western media look stupid is absolutely not worth that. (And anyway, if the media making a prediction about breaking news that turns out wrong is "looking stupid", then they look stupid every day.)
And he does look weak. He publicly declared Prigozhin to be a traitor who must be destroyed, and prepared Moscow for a military attack from Prigozhin. After that, doing anything short of blowing up Prigozhin and Wagner makes him look weak. Instead he publicly makes a deal forgiving them.
My best guess for what happened is that Putin ordered the military to blow up Wagner, and the brass quietly refused, and then the brass quietly told both Putin and Prigozhin how it was going to be.
> My best guess for what happened is that Putin ordered the military to blow up Wagner, and the brass quietly refused, and then the brass quietly told both Putin and Prigozhin how it was going to be.
That theory would neatly explain why both Putin and Prigozhin would go along with a compromise which makes Putin look weak and would normally leave Prigozhin in a position where his odds of fatally falling out of a window are probably 20% per day.
The problem with the theory is that the brass would have to be very sure of their position if they are willing to disobey Putin without getting rid of him, as I assume that he would not take that well. Refusing to fight a traitor generally makes you a traitor, so as long as you end up on Putins shit list anyhow you might at least try for the less risky outcome.
The military disobeying Putin would de facto be a coup with the additional handicap of keeping Putin as a figurehead. While one might stage a coup with 70% of the armed forces on ones side, one would basically need to have 99% of the forces on ones side to try for the figurehead maneuver. I do not think Putin has organized his security apparatus so incompetently that this is likely.
> The military disobeying Putin would de facto be a coup with the additional handicap of keeping Putin as a figurehead.
Is it possible that the actual power was already transferred from Putin to someone else some time ago? (As a mutual agreement -- old man Putin is allowed to spend the rest of his days in peace, providing legitimacy to the new ruler's first months in power.) And Prigozhin simply wasn't in the loop, because he was away from Moscow.
The person who has actual power is the person who others (especially soldiers) think have power, and will thus obey. What would a secret transfer of power even mean?
Everyone keeps saying "this makes Putin look weak". (My first draft about this included that line as well.) But does it?
The logic is different if this was real or kayfabe, but the outline is the same: Prigozhin made a lot of threats, Putin said "Go ahead and hit me", and Prigozhin immediately surrendered in response. I'm sure the Russian media will say that this shows how strong Putin is: he defeated a coup without firing a shot!
But they shot down some helicopters and killed people.
And crossed a border and took a coty without resistance.
Russian state media can spin in however they want but russians at large still have internet access.
Tanks in the street of moscow and roadblocks? 1991 wasn't that long ago. Russians know what these things mean.
For any autocrat ancient or modern, coups are a danger. Discouraging your generals from trying any coups is very important.
One central promise which keep your generals in line is that anyone who tries a coup and fails will end up with their head on a spike, possibly with their friends and family next to them. If you don't follow through on that, it establishes a terrible precedent.
Even long-established democracies would totally throw the book at a military official who tried their hand at regime change.
"Come at the king, best not miss". I agree that the other theory that makes sense is "Prigozhin is already dead and they are waiting to announce it until they can blame it on Ukraine".
You think it's likely that Wagner shooting down 3 out of Russia's total supply of ~20 Mi-8MTPR-1 EW helicopters, among other air assets, leading to at least 13 airmen deaths, was part of a routine military convoy that was choreographed ahead of time?
{{evidence needed}} - not just a "Business Insider says the Kyiv Independent says Ukrainian officials say it happened" citation, but actual evidence that includes details such as when planes were shot down, and in what oblast.
Without a citation, I don't know if I should consider this as either "fake news", "a successful attempt to detect traitors in the Russian air force", or "actual evidence against my theory".
Oryx has links to pictures and Russian sources of multiple helicopters and ground vehicles that were destroyed. I'm not enough of a Geoguessr expert to personally confirm they happened in Voronezh, but at the very least it's suspicious that these new photos all came out while the convoy was on the move:
https://www.oryxspioenkop.com/2023/06/chefs-special-documenting-equipment.html
Also, video of roads being torn up on the way to Moscow:
https://www.businessinsider.com/video-shows-crews-tearing-up-highways-block-wagner-moscow-advance-2023-6
This strikes me as the most compelling - you can *maybe* argue that the helicopter shootdowns were some sort of incredibly tragic miscommunication, but tearing up the roads in front of the convoy pretty firmly demonstrates that you don't want the convoy to go to Moscow.
Also "an attempt to detect traitors in the Russian air force" makes no sense. If you suspect someone of treason, putting them at the controls of a loaded attack helicopter is the last thing you'd want to do. What next, will Putin detect traitors in his staff by handing each of them a handgun and seeing who takes a shot at him?
Russia has been in a hot war in Ukraine for the past 16 months, and almost every day since then there have been reports of Russian materiel being destroyed.
Yes. In Ukraine. And very occasionally in Russia, but then always either close to the border or involving very fixed targets. Ukraine does not have any weapons that could plausibly target a helicopter flying at low altitude over Voronezh, from any territory Ukraine currently controls.
What exactly are you claiming here? That the Ukrainian air defense forces just coincidentally had their most successful day ever on the same day Russia decided to move a huge column of military hardware from Rostov to Moscow, which coincidentally also happened on the same day they did some major road work on the roads to Moscow?
EDIT: And also, the Ukrainians didn't claim this success as their own, they decided to claim it happened in Voronezh for the lulz?
Ukraine has all sorts of restrictions on their use of foreign weaponry, and the most important one is "don't use NATO materiel to attack targets in the Russian Federation".
They *have* to lie about it if that is what happened.
Geolocated videos and photos of the the incident that caused the most deaths here: https://twitter.com/Osinttechnical/status/1673052618656997377
That thread gives latlon 49.649689, 39.846627 . Which is very close to the Ukrainian border, but not particularly close to the M4 motorway which the Wagner convoy was on.
I think that is just the ongoing War in Ukraine. If there was some intra-Russian friendly fire here, it had nothing to do with either Prigozhin's political ploys or the convoy to Moscow.
Yeah, close the 2014 Ukrainian border, but not anywhere near Ukrainian controlled areas - something like 20 miles from the M4 highway, but eyeballing it like 75 miles from the front lines.
Seems like you're not interested in changing your mind.
Edit: Also, how close is the wreck supposed to be to the highway, if it was travelling at several hundred miles per hour when it was shot down?
Posted by the Associated Press an hour ago:
"The leader of the Wagner mercenary group defended his short-lived insurrection in a boastful audio statement Monday, but uncertainty still swirled about his fate, as well as that of senior Russian military leaders, the impact on the war in Ukraine, and even the political future of President Vladimir Putin.
Russian Defense Minister Sergei Shoigu made his first public appearance since the uprising that demanded his ouster, in a video aimed at projecting a sense of order after the country’s most serious political crisis in decades.
In an 11-minute audio statement, Yevgeny Prigozhin said he acted “to prevent the destruction of the Wagner private military company” and in response to an attack on a Wagner camp that killed some 30 fighters.
“We started our march because of an injustice,” Prigozhin said in a recording that gave no details about where he is or what his future plans are.
A feud between the Wagner Group leader and Russia’s military brass that has festered throughout the war erupted into a mutiny that saw the mercenaries leave Ukraine to seize a military headquarters in a southern Russian city and roll seemingly unopposed for hundreds of miles toward Moscow, before turning around after less than 24 hours on Saturday.
The Kremlin said it had made a deal for Prigozhin to move to Belarus and receive amnesty, along with his soldiers. There was no confirmation of his whereabouts Monday, although a popular Russian news channel on Telegram reported he was at a hotel in the Belarusian capital, Minsk.
In his statement, Prigozhin taunted Russia’s military, calling his march a “master class” on how it should have carried out the February 2022 invasion of Ukraine. He also mocked the Russian military for failing to protect the country, pointing out security breaches that allowed Wagner to march 780 kilometers (500 miles) without facing resistance and block all military units on its way.
The bullish statement made no clearer what would ultimately happen to Prigozhin and his forces under the deal purportedly brokered by Belarusian President Alexander Lukashenko...."
[Addendum: I also just read Reuters' article about the audio recording, posted 20 minutes ago, it's basically the same as the AP's.]
I'm gong to be searching for a new job soon. I've seen lots of posts about LLMs helping people with resumes and cover letters etc. so I have a few questions:
1. Is this actually something that GPT is good enough at that if you are someone who is mediocre to average at resume/cover letter writing that it will meaningfully help?
2. is GPT-4 enough better on this kind of task than 3.5 to be worth paying for?
3. Is there some other tool or service (either human or AI) that is enough better than ChatGPT that is worth paying for and would obviate the need of paying for GPT 4 for this purpose?
FWIW, I've heard that Bing in creative mode uses GPT-4.
In general, you should try it. It's hard to say whether it's "good enough" depending on the person, the prompts, and a bunch of other variables, but spending more time revising your resume will probably make it better, and if using an AI helps you to spend more time you end up with the same result.
I think this probably depends significantly on which field and level you're writing the resume for. What I would look for in hiring entry-level software devs is going to be different than what someone hiring for something else would look for (and tbh, is probably different than what my manager is looking at in selecting candidates that I would see). It also depends on your level of (relevant) work experience.
The raw information is probably more important than the presentation of it unless you're leaning hard into the florid side on both questions, and I feel like it's hard for ChatGPT to fuck that up.
(As for 3.5 vs 4: if you can afford to toss out $20, GPT4 is fun enough to play with that you might want to try it anyway. It *is* measurably better at most things, but probably not enough to be a dealbreaker if that $20 is needed elsewhere)
So Ecco the Dolphin wasn't based on Lilly or his theories of LSD/Dolphins/Sensory deprivation...
But it was based on a a movie that was inspired by Lilly and the creator likes the adjacent theory of Dolphins/Sensory Deprivation, but not the LSD portion? And the Dolphin is coincidentally named after one of Lilly's coincidental theories, but the author assures us that is pure coincidence.
Uh huh...
Doesn't seem like much of a correction.
I had the same thought.
Is there a surgeon in the house?
Surgeons have a reputation for working really punishing hours, up there with biglaw associates and postdocs. I'm trying to understand why. Is it just the residencies that are punishing, or do the long hours extend into post-residency careers? And what's driving people to keep going?
I'm not a surgeon, but I remember reading a comment from a surgeon once addressing the question of why surgical training is so grueling (maybe it was even a comment on one of Scott's blogs lol!)
The answer was basically, surgeons frequently have to perform at a high level under conditions of extreme stress and fatigue; the only way to become good at that is to get lots of practice performing at a high level under conditions of extreme stress and fatigue.
Here's the maximum duty hours section of the collective bargaining agreement for medical residents in the province of Ontario, Canada.
https://myparo.ca/your-contract/#maximum-duty-hours
(b) No hospital department, division or service shall schedule residents for in-hospital call more than seven (7) nights in twenty-eight (28), including two (2) weekend days in eight (8) weekend days over that twenty-eight (28) day period. A weekend day is defined as a Saturday or a Sunday.
...
In those services/departments where a resident is required to do in-hospital shift work (e.g. emergency department, intensive care), the guidelines for determining Maximum Duty Hours of work will be a sixty (60) hour week or five (5) shifts of twelve (12) hours each. Housestaff working in these departments will receive at least two (2) complete weekends off per month and (except where the resident arranges or PARO agrees otherwise) shall between shifts be free of all scheduled clinical activities for a period of at least twelve hours. All scheduled activities, including shift work and educational rounds/seminars, will contribute towards calculating Maximum Duty Hours.
Things have chhanged, in Europe at least. My surgical internship in the nineties in germany was quite intense. My then-wife chose to meet the seamen's wives as she claimed she wouldn't see me more often than the other girls there did. We interns did this because it was the only way to become qualified in this field.
Since european regulations kicked in, everyone has to go home after a night on call and as a rule, working hours have to be documented and are limited to 48 a week.
Really, nobody wants a tired surgeon working on them.
It depends on the alternatives. Sometimes a tired surgeon is much better than no surgery at all.
Point taken.
As an ignorant uninformed outsider, surgery is one of those things that does require a lot of hours to do. First, you're putting in the time to learn how to cut people open, take bits out, sew them back up, and have them live after all that. You're watching, assisting, doing.
Then once you're a fully qualified butcher, it does take hours to cut people open, take bits out, and sew them back up. It really is one of the descendants of the traditional 'medicine as a guild' practice.
I think they get a lot of time off too. At least the surgeon I know does. But he does work VERY long hours sometimes.
Surgeries themselves sometimes take quite a long time. You might only be in surgery for a couple hours, but you also have to prep and debrief and do a bunch of paperwork. I would think if you "work task unit" took several hours, there would be a bias toward working fewer longer shift s in the name of efficiency. You can't just squeeze in one more surgery in 30 minutes at the end of a shift.
Some of it is probably golden handcuffs a bit too. When you get paid an obscene amount to do something, it can be hard to stop even if your work/life balance sucks.
I run into that with my work sometimes, where money will just fly out of my computer as long as I am willing to sit at it. Makes it tempting to put in long hours because more work means more pay in a way it does not at many other jobs.
Sticking with the theme of early hominins (and AGI), which I also posted about below, I'm wondering if new discoveries about Homo naledi don't complicate the evolutionary analogy often made by FOOMers, best expressed by the cartoon going around showing a fish crawling out of the water before a line of various creatures in different stages of evolution with modern man at the front of the line. Each creature thinks "Eat, survive, reproduce" except for the human who suddenly thinks "What's it all about?" https://twitter.com/jim_rutt/status/1672324035340902401
The idea is that AGI suddenly changes everything and that there was no intermediary species thinking "Eat, survive, reproduce, hmm... I sometimes wonder if there's more to this...." I.e., AGI comes all at once, fully formed. This notion, it seems, has been influential in convincing some that AI alignment must be solved long before AGI, because we won't observe gradations of AI that are near but not quite AGI (Feel free to correct me if I am totally wrong about that.)
Homo naledi complicates this picture because it was a relatively small-brained hominin still around 235,000 - 335,000 years ago which buried its dead, an activity usually assumed to be related to having abstract notions about life and death. It also apparently made cave paintings (although there is some controversy over this, since modern humans also existed around the same location in South Africa).
https://www.newscientist.com/article/2376824-homo-naledi-may-have-made-etchings-on-cave-walls-and-buried-its-dead/
I want to start a campaign against the concept of alignment, which I think is incoherent. Humans aren't even aligned, so how are we going to align an AI? I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.
> I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.
Part of the pro-alignment argument is that an AI would not follow the rules in the way we want without understanding our values. OTOH, understanding does not imply sharing.
First, I would argue that most humans are sort-of aligned. They might cheat on their taxes, but will generally be reluctant to murder children even if it would be to their advantage.
Furthermore, most humans are not in a position of unchallenged power, so social incentives (like criminal law) can go a long way to stop them from going full Chaotic Evil. A superintelligence without any worthy opponents would not be kept in check by externally imposed incentives.
I assume that making a randomly selected human god-emperor of the world will at the worst result in them wasting a good portion of the worlds GDP on their pet projects, hunting some species to extinction or genociding some peoples. Perhaps a bit of nuclear geoengineering. Perhaps one percent of human god-emperors would lead to human extinction.
By contrast, it is assumed that the odds of a randomly selected possible AI being compatible with continued human agency are roughly nil, simply because there are so many possible utility functions an AI could have. When EY talks about alignment, I think he is not worrying about getting the AIs preference for daylight saving times or a general highway speed limit (or whatever humans like to squabble over) exactly right, he is worried that by default, an AI's alignment will be totally alien compared to all human alignments.
Explicitly implementing rules with machine learning is seems to be hard. Consider ChatGPT. OpenAI did their best to make it not do offensive things like telling you how to cook meth or telling racist jokes. But because their actual LLM was just a bunch of giant inscrutable matrices, they could not directly implement this in the core. Instead, they did this in the fine-tuning step. This "toy alignment test" failed. Users soon figured out that ChatGPT would happily recite meth recipes if asked to wrap them in python code and so on.
Making sure an AI actually follows the three laws of robotics feels hard. (Of course, Asimov stories are full of incidents where the three laws lead to non-optimal outcomes).
I think we mostly agree. Humans are sort of aligned, and a random AI likely wouldn't be. However, we're not going to end up with random AIs, since we're evolving/designing them, so they will be much closer to human preferences than random ones. Unfortunately, an
Anyway, Asimov's laws probably aren't the right metaphor either. As you note, they're hard to implement and have unintended consequences, like any simple rule overlaid on a complex system.
Mainly, I was focusing on how sort of aligned along human lines seems inadequate for an AI, similar to how we wouldn't accept self-driving cars that have accidents at the same rate as humans. Alignment also seems hopelessly fuzzy compared to thinking about actual moral calculation, but maybe people who have thought about it more than me are clearer about it.
Greg:
>I want to start a campaign against the concept of alignment, which I think is incoherent. Humans aren't even aligned, so how are we going to align an AI? I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.
QuietNaN
>First, I would argue that most humans are sort-of aligned. They might cheat on their taxes, but will generally be reluctant to murder children even if it would be to their advantage.
I notice neither of you offer a definition of "alignment".
One thing it could mean is one entity having the same values as another.
There is no evidence that humans share values. CEV is not eidence that humans share values, because CEV is not a thing. If humans do not share values, then alignment with the whole of humanity is impossible, and only some more localised alignment is possible. The claim that alignment is the only way of achiveing AI safety rests on being able to disprove other methods, eg. Control, and on being able to prove shared universal values. It is not a given, although often treated as such in the MIRI/LW world.
Another thing it could mean is having prosocial behaviour, ie alignment as an end not a means.
QuietNaN
>Furthermore, most humans are not in a position of unchallenged power, so social incentives (like criminal law) can go a long way to stop them from going full Chaotic Evil.
If the means of obtaining prosocial behaviour is some kind of external threat, that would be Control, not Alignment.
Note to fellow AI novices: CEV seems to be coherent extrapolated volition, discussed e.g. here https://www.lesswrong.com/posts/EQFfj5eC5mqBMxF2s/superintelligence-23-coherent-extrapolated-volition .
I would ad-hoc define alignment as minimizing the distance between two utility functions.
> There is no evidence that humans share values.
Are you arguing that our values are all nurture, and that is raised in an appropriate environment, we would delight in grilling and eating the babies of our enemies? That even the taboo against killing close family is purely socially acquired?
Arguing that humans share no values feels like arguing that it is impossible to say if elephants are bigger than horses, because the comparison depends on the particular elephant and the particular horse.
The claim of shared human values is not that everyone shares some values, it is the weaker claim that the vast majority of people share some value. Sure, you have the odd negative utilitarian who believes that humanity would be better off dead, and there is probably some psychopath who would delight in torturing everyone else for eternity. Even horrible groups like the Nazis or the Aztecs don't want to kill all humans.
Arguing about the specific alignment of AGI seems like arguing over who should get to run the space elevator. I would not want a superintelligence running on any interpretation of Sharia law or the core principles of the Chinese Communist Party, but would prefer (a human-compatible version of) either to a paperclip maximizer, which seems more like the typical misalignment magnitude of a random AI.
Control of a superintelligence seems hard. If we can align it, we can certainly make it controllable. If we don't know if it has an internal utility function and what this function might be, it seems quite hard to control it. Even if you just run it as an oracle, how do you know that whatever advice it gives you does not further its alien long term goals?
External threats will not work on something vastly more smart than humans. We can only punish what we can detect, so the AGI only has to keep its agenda hidden until we are no longer in the position to retaliate.
> Are you arguing that our values are all nurture, and that is raised in an appropriate environment, we would delight in grilling and eating the babies of our enemies?
It's happened.
> The claim of shared human values is not that everyone shares some values, it is the weaker claim that the vast majority of people share some value.
But even if there is some subset of shared values, that is not enough. If your AI safety regime consists of programming an AI with Human Values, then you need a set of values, a utility function, that is comprehensive enough to cover all possible quandaries.
You can see roughly 50-50 value conflicts in politics -- equality versus hierarchy, order versus freedom, and so on. If an AIs solution to a social problem creates some inequality, should it go ahead? Either you back the values that 50% of people have, or you leave it indeterminate, so that it can't make a decision at all.
> . I would not want a superintelligence running on any interpretation of Sharia law or the core principles of the Chinese Communist Party,
Millions would., Neither is a minority interest.
> a paperclip maximizer, which seems more like the typical misalignment magnitude of a random AI.
So your defense of the human value approach is just that there are even worse things, not that it reaches some absolute standard.?
> If we can align it, we can certainly make it controllable.
The point of aligning it is that you don't need it to be controllable.
> Even if you just run it as an oracle, how do you know that whatever advice it gives you does not further its alien long term goals?
How do you know it has long term goals?
Maybe everything sucks. My argument against aligning an AI with human value is that human value isn't simultaneously cohesive and comprehensive enough, not that there is something better.
> External threats will not work on something vastly more smart than humans.
Depends how nasty you want to get.
Some sort of alignment has to be solved before takeoff, whether foom or not. OTOH, an AGI probably has to exist before alignment is possible. So there's a very narrow window. And I think that "alignment", in the general form, is probably NP hard. I also, however, think that the specific form of "We like people. Even when they're pretty silly." is not a hard problem...well, no harder than defining "people".
Why does there necessarily have to be any alignment? It seems to me that AGI, if it happens, is likely to be an extremely powerful and dangerous tool, but that safety considerations, as with other tools and weapons, will have to come from society.
Even if AGI is conscious and agentic, Robin Hanson has argued that "alignment" would do more harm than good, describing it as "enslavement", which is more likely to make an enemy of the AGI than if we didn't pursue alignment. I have no idea if Hanson is correct, but his opinion on the issue should probably carry as much weight as those on the pro-alignment side. If not, why not?
This is a little like claiming that golden retrievers are 'enslaved' because we bred them to be the way they are. Alignment is not some process we're going to carry out in adversarial fashion against an AI that already has another agenda ... in that situation, if it's already advanced enough that there's a moral issue, we're probably dead meat.
And no, given the misunderstanding of the ground on which we are operating reflected in his writing, I don't see any reason to give his opinion much weight.
Golden retrievers are bred, a process that uses the transparently observable features of one generation to choose the parents for the next. The difficulty of AI alignment -- I'm basing this almost entirely on what I've read on this blog and on Less Wrong, but correct me if I've misunderstood -- is that whatever alignment exists inside the black box can be hidden from view. Moreover, the AI might have an incentive to hide its true "thoughts" from view.
Why might an AI have the incentive to hide its thoughts from view? Assuming the AI is conscious there may be many reasons, but one reason -- this is coming from the Hansonian view -- might be because it realizes we are trying to "align" it. From that perspective "align" and "enslave" may take on similar connotations.
Granted, you say, "Alignment is not some process we're going to carry out in adversarial fashion against an AI that already has another agenda". But how do we know when an AI already has another agenda? I realize I'm probably not among the first 10,000 people to ask that question, but my OP, I believe, is relevant to it. If AI development (I believe the word "evolution" is misleading) is gradual in the sense of relatively continuous and AI will eventually develop ulterior motives then it will be almost impossible to say at what point along the continuum those motives begin to develop. GPT-4 could have them.
Burying of the dead has already seen in present-day elephants, so if that's the standard for an intermediary species, then we don't need to look to fossil evidence to confirm they exist. Dolphins and chimps also show signs of mourning their dead, although not the specific ritual of burying them.
>”They're running low on money due to Rose Garden renovations being unexpectedly expensive and grants being unexpectedly thin,”
Am I to believe that a premier rationality organization was unable to come up with a realistic estimate for how far overbudget their Bay Area renovation project would be? It sounds like they took a quoted price at face value because they wanted a nice new office , even though these are very smart people who would tell someone else to add a ridiculous safety margin when making financial decisions off of estimates like these.
(Lightcone Infrastructure CEO here): We started the project with enough funds for something like the 60th percentile renovation outcome. FTX encouraged us to take on the project and was promising to support us in case things ran over. As you can imagine, that did not happen.
We also did not "take a quoted price at face value". I've been managing a lot of the construction on the ground, and we've been working in a lot of detail with a lot of contractors. The key thing that caused cost overruns were a bunch of water entry problems that caused structural damage and mold problems that we didn't successfully identify before the purchase went through. We did try pretty hard, and worked with a lot of pretty competent people on de-risking the project, but we still didn't get the right estimate.
I am not super surprised that we ran over, though it sure really sucks (as I said, we budgeted for a 60th percentile outcome since we were expecting FTX support in case things blow up).
Looking at the photos online, the hotel is gorgeous but yeah - something like that is going to take a *ton* of money. And a little thing called the pandemic probably didn't help either.
https://www.trivago.ie/en-IE/oar/hotel-rose-garden-inn-berkeley?search=100-370076
I think he said they're a website hoster and hotel manager that happens to specialize in serving the rationality communities. He didn't say they're a "premier rationality organization". (He also didn't say if this is an organization of 2 people or 20 people or what.)
(For context, we're about 8 full-time staff and usually have like 5 contractors on staff for various more specialized roles)
Please suggest ways to improve reading comprehension.
I've always struggled with the various -ese's (academese, bureaucratese, legalese). I particularly struggle with writing that inconsistently labels a given thing (e.g., referring to dog, canine, pooch in successive sentences) or whose referents (pronouns and such) aren't clear. I can tell when I'm swimming in writing like this, and my comprehension seems to fall apart.
As a lawyer, I confront bad writing all the time and it's exhausting! I will appreciate all suggestions. Thank you.
Unfortunately, this is what a lot of people think of as "good" writing, not "bad" writing. Newspapers and fiction want to keep their words fresh, and perhaps convey some minor new information in every sentence. Here's the head of the current top article in the New York Times:
"With Wagner’s Future in Doubt, Ukraine Could Capitalize on Chaos
The group played an outsize role in the campaign to take Bakhmut, Moscow’s one major battlefield victory this year. The loss of the mercenary army could hurt Russia’s ambitions in the Ukraine war."
By using the word "Wagner" in one sentence, and "group" in the next, and "mercenary army" in the next, they try to take advantage of a reader going along with the thought that the same thing is being talked about, to sneak in a little bit more information. I've noticed that celebrity magazines do an even more intense version of this, where they'll use a star's name in the first sentence, and then refer to them by saying "the singer of X" or "the star of Y" in place of their name or a pronoun in later sentences, so that you get little tidbits, and also so they never repeat.
Academic writing, and legal writing, tries to do the opposite. We *don't* want to convey information that *isn't* being intended, so we try to stick with the *same* word or term every single time unless something very significant is being marked by changing to a new one. Most ordinary humans find this "boring" and "dry", but academics and lawyers find it precise and clear.
Good point about newspaper and magazine stories compared to academic and legal writing.
You're right, which makes Waldo's complaint interesting - they say they struggle with 'legalese' and 'bureaucratese', but that's where the minor sin of Elegant Variation is least likely to be committed.
Unless I've failed my own reading comprehension, anyway.
Legalese is not characterized by avoiding elegant variation. Legal writing *should* avoid elegant variation, but most lawyers write like shit.
Having recently struggled to understand a tax form, I don't think that's an accurate characterization of 'bureaucratese'. It is not actually precise, unless you know their traditional interpretation of the terms...which is less closely related to common English than is the physics use of "force" or "energy".
Agree. the -ese's are not precise. They're characterized by turgid language and baroque constructions, mostly from aping old styles that no longer are common (and thus unfamiliar on top of being unclear).
True. One of the more extreme manifestations of this are diplomatic readouts, where various bland formulas are barnacled with years of precedent and significance. Everyone knows about 'full and frank exchange of views' meaning an unholy row, but there are quite a few of these. (A recent discussion on an earlier thread comes to mind, about guarantees vs assurances in the context of the Budapest Memorandum.)
Since you even use the term "Elegant Variation" I bet you know this, but Fowler's Modern English Usage was complaining about this a literal century ago.
I think the thing that helps the most is just practice. You could try exercises like writing out the definition of the words you get stuck on and the common synonyms for them. In academic writing at least I think you just need a certain amount of exposure for it to click. It is annoying because academics are very specialized, so even within a field (or even a subfield) terms can mean different things depending on the context.
I don't get stuck on words. I get stuck on structures, i.e., poor arrangement of words.
Asking gpt to rephrase is useful (in particular, I've found "rephrase in the form of a greentext" surprisingly useful, though there's room to improve that). Also, to the degree that you can, just picking reading material based on readbility is helpful.
Listen to what Steve Hsu says at the end about AI alignment not really being possible at 54:29.
https://www.youtube.com/watch?v=Te5ueprhdpg&ab_channel=Manifold
https://pauseai.info/
How did Eliezer Yudkowsky go from "'Emergence' just means we don't understand it" in Rationality: From AI to Zombies to "More compute means more intelligence"? I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent. It seems like saying "Most people can't tell fool's gold from the real deal, therefore fool's gold == real gold". I know there are probably 800,000 words I can read to get all the arguments, but what's the ELI5 core?
The philosophical question is whether something that is a perfect simulacrum (of an intelligent being, or a conscious one, or one that suffers) has to be accorded that moral status. We don't generally downgrade our estimation of human status just because we understand more of the whole stack of meat biochemistry that makes us do what we do.
So the problem is, or will ultimately be, not 'most people can't tell fool's gold from real gold', but 'absolutely no one can tell this synthetic gold from real gold, but we know it was made by a different process'. Maybe synthetic diamonds would be a better analogy...
This was actually touched on by Stanislaw Lem (writing in 1965) in 'The Cyberiad', in one of the short stories (The Seventh Sally, or How Trurl's Own Perfection Led to No Good). One of the protagonists creates a set of perfectly simulated artificial subjects for a vengeful and sadistic tyrant who has been deposed by his previous subjects...
That sounds like a straw man. Synthetic gold is absolutely real gold. But something that manages to produce human-sounding answers the first 10 times a human communicates with it isn't human, intelligent, sentient, conscious, or anything other than a pattern matcher.
Recently someone on twitter asked EY, whether NN are just a cargo cult. Yes - he agreed, - a cargo cult that successfully managed to take off and land a straw plane on a straw runway.
I think this exchange captures the essence of the issue. I believe, Eliezer still agrees that "'Emergence' just means we don't understand it". The problem is that we managed to find a way to make stuff work without understanding it, anyway. When the core assumption "Without understanding X we can't create X", is wrong - then the fact that we still don't understand X isn't soothing anymore. It's scary as hell.
> I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent.
It's not about what humans believe per se, it's about whether the job is done. A fact about the territory, not the map. If "just a matrix multiplier" can write quality essays, make award winning digital art, win against best human players in chess and go, etc. - then the word "just" is inappropriate. You can define the term "intelligence" in a way that exclude AI, but it won't make AI less capable. Likewise, the destruction of all you value in the lightcone isn't less bad because it's done by "not a true intelligence".
Destruction of all our values can't really happen unless we build something that either a) has a will of its own, or b) has been given direct access to red buttons. The first case might be AGI, the second case is just stupid people trying to save a buck by firing their missile silo personnel. I'm infinitely more worried about the second case, because human short-sightedness is a very well-known problem, and I don't believe we understand sentience, intelligence, consciousness, or any other parts of our minds/brains well enough to model it.
> stupid people trying to save a buck by firing their missile silo personnel.
Yes this is also a dangerous case but a tangental one to our discussion. I don't think being literally infinitely more worried about it is justified.
> I don't believe we understand sentience, intelligence, consciousness, or any other parts of our minds/brains well enough to model it.
Your reasoning is based on two assumptions.
1) We need to understand X to create X.
2) Counsciousness, intelligence and will are the same thing.
1) Is already falsified by the existence of gradient descent and deep learning and results they produce.
2) Seems less and less likely. See my discussion with Martin Blank below.
The fact that we don't understand X but still can make X means that we are in an extremely dangerous position where we can make an agent with huge intelligence and will of his own, without us even knowing it. My original comment is about it and I notice that you failed to engage with the points I made there.
Well we already had this exact problem with just like ENIAC.
Nothing so far has changed. Computers are a way to make thinking machines which are better than humans at some tasks. The number of tasks grows, but so far there doesn't seem to be reason to be concerned they are "conscious".
Which is think it is main thing we are talking about right? Have we created another mental entity? We always assumed "calculators" were going to get better and better and better. And they have.
Now they make/ape art and write/madlib essays.
As already mentioned, the practical problem is somewhat unconnected to the philosophical one. If unaligned AGI can destroy everything, the fact that it's just doing some really excellent Chinese Room emulation of a paperclip maximizer and doesn't really 'want' anything or have consciousness or whatever is ... really irrelevant. I mean, it's relevant to some related issues like how we treat artificial intelligence(s), but beside the point when it comes to the importance of solving the alignment problem.
Scale matters. It really does.
I would agree that ChatBots aren't AGI, but they're AI with a wider range of applications that we have been ready form. And they can be mixed with other approached to extend their capabilities. If you don't want to call that intelligence, that's fine, but they're cutting the number of entry level positions in a number of fields...and they aren't standing still in their capabilities.
I'm still predicting AGI around 2035, but I'm starting to be pushed towards an earlier date. (OTOH, I expect "unexpected technical difficulties" to delay full AGI, so I'm still holding for 2035.)
> Well we already had this exact problem with just like ENIAC.
No. People who made ENIAC had a gear level model of how it worked . They wouldn't be able to make it without this knowledge. It's not the case with modern deep learning AI paradigm.
> The number of tasks grows, but so far there doesn't seem to be reason to be concerned they are "conscious".
> Which is think it is main thing we are talking about right?
No, we are definetely not talking about "counsciousness". And it's very important to understand why. People do tend to confuse counsciousness with intelligence, freedom of will, agency, identity and a bunch of other stuff which they do not have good model of but which feels vaguely similar. It's an understandable mistake, but still a mistake, nevertheless.
Unless we believe that modern AI are already conscious, it's clear that counsciousness isn't necessary for many of the tasks that people associate with intelligence, such as learning, decision making and language. So it seems more and more likely that counsciousness isn't necessary for intelligence at all. And if humanity is erradicated by uncounscious machines - humanity is still eradicated. We do not get to say: "Well they do not have counsciousness so it doesn't count".
But a computer was already clearly intelligent in a limited way 40 years ago?
Computers could learn, make decisions, and process language when I was a child. They weren't nearly as good at it, but they could.
I agree they are different problems, but a lot of scenarios involving AGI eliminating everyone depend on it having a "mind".
40 years ago computers showed some amount of behaviour we associate with intelligence. Now we know how to make the do more stuff and much better *without understanding ourselves how they do it*. That's the core issue here.
> I agree they are different problems, but a lot of scenarios involving AGI eliminating everyone depend on it having a "mind".
A "mind" in a sense that it can make decisions, predict future, plan and execute complex strategies. It still doesn't have to be counscious.
We used to think that counsciousness is required for such a mind. It seemed very likely because we are such a mind and we are counscious. Then we managed to make uncounscious minds that can poorly do this stuff. So the new hypothesis was that counsciousness is required for some competence level in the task, human level, for instance. Now we have superhuman domain AI and still no need for counsciousness. So as I said, that assumption is becoming less and less likely.
Meh, I still think we haven't really bridged some important gaps here. Not that we won't, but when I worry about AGI I am not really worried about non-sentient paperclip maximizers. It is the sentient ones that would seem to be the actual existential threats.
Not that their aren't problems and questions that arise from our current progress, but they are old style problems, not X-risk ones IMO.
Anyway, while the recent progress has been surprising in light of so many years of little progress, I still don't see it as overall out of form with the broader scale timeline.
This depends on how you think about it. They had a detail level of understanding of how the pieces worked, and even how very large sub-assemblies worked. And other people had an understanding of how those larger modules were interacting. But nobody understood the entire thing.
The problem now is that while we still have that kind of understanding, it split between a drastically increased number of level, and the higher levels don't even know the names of the lower levels, much less who works in them. I've never learned the basics of hardware micro-coding, and I've never known anybody who has, and a lot of people don't even know that level exists
I think the difference between
"No single person in the team understand how X works but team as a whole does"
and
"No single person and no team as a whole understand how X works"
is quite clear.
Suggested reading: https://gwern.net/scaling-hypothesis
Compute would add some capabiliity , if only speed. Remember, the ultimate point ia about danger, not intelligence per se.
what is intelligence to you then? Because you can have this debate forever and ever. If it can solve novel problems it is intelligent.
Humanoid Robots Cleaning Your House, Serving Your Food and Running Factories
>>https://www.yahoo.com/lifestyle/humanoid-robots-cleaning-house-serving-204050583.html
McDonald's unveils first automated location, social media worried it will cut 'millions' of jobs
>>https://www.foxbusiness.com/technology/mcdonalds-unveils-first-automated-location-social-media-worried-will-cut-millions-jobs
Real people losing jobs to AI. Its coming for you next.
https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/
People lose jobs to automation all the time. We will see if there’s any significant effect here on job figures in a few years. I doubt it.
But AI automation will mean goods and services are so cheap, we'll all be living in luxury!
*turn off snark*
Yeah, while I'm sorry for tattooed 25 year old San Franciscans in nice jobs, this is just the extension of what has been going on for decades for blue-collar and less skilled workers. Remember 'learn to code for coalminers'? Now it's coming for the white collar jobs. Since the purpose of business is to make money, why the sudden surprise that companies that moved their manufacturing lock, stock and barrel overseas to save on costs are now looking at *your* job as an unnecessary expense?
We're moving to a service economy, if we haven't already moved in large part. Be prepared to be that dog walker or cleaner even if you went to college.
I agree that there's is definitely some level of double-standardism here. If blue collar workers did what the Writers Guild of America did, there'd be a lot more thinkpieces about how those uneducated proles don't understand economics and are falling for the Luddite fallacy.
That said, I do think "AI automation will mean goods and services are so cheap, we'll all be living in luxury!" is basically the right way to think about it.
Why would an AI, or the owners of a subservient AI, give you all those goods and services?
Ignoring any legal changes (either the techies take over and make us slaves, or UBI in the opposite direction), if the owners of the AI refuse to share it with anyone then we non-AI owners all keep our own jobs to provide services to each other?
That would give you a diminished subset of the current economy. We can argue about how diminished, but it's not going to be "goods and services so cheap we are all living in luxury".