Hello: I just now happened on your review of the book Jaynes ought to have written, and am very glad to have found it. Now reading "On the Failure of Oracles" at U of Chicago's web site. Thanks!
I’m looking for a small town to write a novel from. It has to be in the mountain west (CO/WY and surrounding area), ideally with good surrounding nature to go on hikes or runs. I’d rather avoid somewhere like Jackson Hole or Aspen, because I expect those would be overrun with tourists, but unpopular places are by construction hard to find. Does anyone have any suggestions?
"To add to this: just learned that MIRI (then SIAI) was literally founded to *speed up* the coming Singularity, to prevent the gray goo apocalypse that Yudkowsky circa 2001 was convinced would kill us otherwise:
'On the nanotechnology side, we possess machines capable of producing arbitrary DNA sequences, and we know how to turn arbitrary DNA sequences into arbitrary proteins (6). We have machines – Atomic Force Probes – that can put single atoms anywhere we like, and which have recently [1999] been demonstrated to be capable of forming atomic bonds. Hundredth-nanometer precision positioning, atomic-scale tweezers… the news just keeps on piling up…. If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight….
If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level – nanotechnology – in I forget how many years (the page vanished), but I think around 2035. This, of course, was before the time of the Scanning Tunnelling Microscope and “IBM” spelled out in xenon atoms. For that matter, we now have the artificial atom (“You can make any kind of artificial atom – long, thin atoms and big, round atoms.”), which has in a sense obsoleted merely molecular nanotechnology – the surest sign that nanotech is just around the corner. I believe Drexler is now giving the ballpark figure of 2013. My own guess would be no later than 2010…
Above all, I would really, really like the Singularity to arrive before nanotechnology, given the virtual certainty of deliberate misuse – misuse of a purely material (and thus, amoral) ultratechnology, one powerful enough to destroy the planet. We cannot just sit back and wait….'
And he made this incredible prediction:
'Our best guess for the timescale is that our final-stage AI will reach transhumanity sometime between 2005 and 2020, probably around 2008 or 2010.'"
I started experimenting with betting on prediction markets a few weeks ago and recently decided to make my first market. I have some questions about getting the most out of them:
1. Does anyone have any reading to recommend on best use cases? I.e., where does the wisdom of the crowd "fail" or trend towards a wrong answer as N approaches infinity? Maybe better phrased: where, if anywhere, does expert opinion reliably outperform the equilibrium point reached by the wisdom of the crowd? (See the sketch just below for one concrete failure mode.)
2. Are there any forums or communities where people who make markets can request extra engagement on questions they think are important, to try to increase the predictive validity of the market? Alternatively, are there ways of doing this on Manifold or Metaculus by spending money or some other mechanism?
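One failure mode can be made concrete: the crowd average only converges to the truth when individual errors are mostly independent. If forecasters share a common bias (everyone read the same misleading headline, say), adding more of them sharpens the consensus, not the truth. Here is a toy simulation, purely illustrative, with made-up numbers:

```python
import random

# Toy model: true value is 100. Each forecaster guesses
# truth + shared_bias + independent noise.
# As N grows, the independent noise averages away; the shared bias does not.

random.seed(0)
TRUTH = 100.0
SHARED_BIAS = 15.0  # a systematic error every forecaster happens to share

def crowd_estimate(n: int) -> float:
    guesses = [TRUTH + SHARED_BIAS + random.gauss(0, 30) for _ in range(n)]
    return sum(guesses) / n

for n in (10, 1_000, 100_000):
    print(f"N={n:>7}: crowd average = {crowd_estimate(n):.1f} (truth = {TRUTH})")

# The crowd converges to ~115, not 100: more participants give a sharper
# estimate of the consensus, which is only as good as the shared information.
```

On this toy picture, expert opinion would beat the crowd exactly where experts have information or models the crowd doesn't share, not just lower-variance guesses.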
I am offering mini-grants to any individual [ideally one I could pay through a 501(c)(3)] who would use the money for any one of the following:
1. Filling an otherwise unmet need of high-IQ people ages 5 to 20, where the grant is likely to help them live up to their potential to make a difference in their current sphere of influence or the larger society. "Make a difference" can be in a progressive, conservative, or apolitical direction.
2. Encouraging discussion of pro-merit issues, e.g., the net negative of yet more redistribution of money and attention from people with greater potential to contribute to a better society to those less likely to. Like the other two foci, this must be used for an initiative that would otherwise go unfunded.
3. Taking a worthy step toward understanding the biological basis of reasoning or impulse control that would otherwise go unfunded.
Email me a brief proposal saying: 1. What you would do with the money. 2. What makes you a person likely to use the money well. 3. What would be the amount of the grant that would yield maximum benefit per dollar I'd give you. 4. Whether I could send the grant money through a 501(c)(3). Send your proposal to me at mnemko@comcast.net
"Nope, Easter is Christian, because that's when Jesus was crucified and rose from the dead. (Technical details on how the exact day was calculated.) If other religions celebrate the same day, it's because 'equinox' and 'end of winter' are universally interesting things. I did a lot of research on the goddess Iostra, and the spelling is slightly different. Anyway, she is probably made up. Also, maybe it's the other way round, and the pagans worldwide emulated the holiday of the local Christian communities. Easter eggs are a medieval invention, probably started as: 'the Lent is over, now you can eat eggs again'. Easter bunny was invented in 17 century as a rationalization for the eggs. People say a lot of crazy stuff on internet; if you are really curious about the history of religion, get a PhD in Theology."
I'm not sure I follow. If you can't use transcribe because of GDPR (and not because it barely works), then why would another tool not be restricted in the same way?
The two main problems with the Teams version are that when you want to fully transcribe the meeting you have to record names and record video too.
Whereas I'm hoping for another app that transcribes the meeting from audio alone without recording people's names, while still recognising distinct voices. This way, no 'personal' data (name and face) is gathered, but I can still record all the details from the call.
There are tools to record selected parts of the screen. Perhaps you can just record the video in a narrow box around the captions, with no audio or rest of the screen?
Won't the people crying out about AGI killing off humanity look dumb in ten years if it hasn't happened yet, and also doesn't look like it's about to happen?
Think about your dad's reaction if you play Russian Roulette _once_. Won't he look stupid when you survive (as you will in 5/6 cases)? That doesn't make it even one iota less stupid.
It's a different question from what OP asked. For a water pistol a reaction would be dumb even before firing, because there is no question of the outcome. Having to wait 10 years to see what happens implies a degree of uncertainty.
To take your metaphor back to the main topic, you're suggesting not only that we'll be ok, but that there is absolutely no possibility of harm from going bullish on strong AI, no matter what. This is, well, a hell of an argument to make, and it should take proportionally more evidence.
"Will I look like an idiot in 10 years If I spoke my mind" is a great, teenager-inspired, way to never do or say anything of value, and still look like an idiot, because not doing or saying anything is equivalent to doing and saying whatever the mainstream around you does and says, which is also not immune to being wrong and ridiculous.
Given enough time, every idea is dumb.
There is no workaround to being wrong, you have to be wrong in order to be right. I also think that AGI doomers are wrong, but I don't approach it from the ridiculous angle of "Right Side Of History"ism, I just say out loud why I think they are wrong.
What the heck is the point of this comment? Trying to shame people into thinking that AGI won't be catastrophic for humanity? If you genuinely think that AI will kill humanity, the risk of 'looking dumb' is a negligible cost.
"Won't the people saying AI isn't a risk to humanity feel like morons as they're dying at the hands of an AI?"
And what are you even saying? That they'll look dumb if and only if AGI exists in ten years time? Or if AGI doesn't exist by then they'll also look dumb?
If you have an argument to make against the AI risk case, make it, but 'you're gonna look so dumb when you're wrong' is just childish.
Kinda rich coming from the ideology that dresses itself in 59 Kg of pearls and spends long days and nights clutching them about **Checks Notes** the unavailability of Drag shows for kids.
All "pearl clutching" really means is "taking something more seriously than I think it should be taken." I guess it would be a witty and amusing way of making that point, if it weren't such an stale internet cliche at this point.
Ironic, considering wokeness IS pearl clutching, and it's the woke constantly freaking out about things.
"Yeah I don't think assaulting people and burning down people's small businesses across the country in the name of black nationalist outrage is a good thing"
I remember when climate change was going to have New York City underwater by 2020. Or was it 2010? Either way, when doomsday predictions don't happen on schedule, the people who make them don't reconsider; they just push the date out and deny ever having made the old prediction.
"Irene-like storms of the future would put a third of New York City streets under water and flood many of the tunnels leading into Manhattan in under an hour because of climate change, a new state government report has warned.
Sea level rise due to climate change would leave lower Manhattan dangerously exposed to flood surges during major storms, the report, which looks at the impact of climate change across the entire state of New York, warns.
"The risks and the impacts are huge," said Art deGaetano, a climate scientist at Cornell University and lead author of the ClimAID study. "Clearly areas of the city that are currently inhabited will be uninhabitable with the rising of the sea."
Factor in storm surges, and the scenario becomes even more frightening, he said. "Subway tunnels get affected, airports - both LaGuardia and Kennedy sit right at sea level - and when you are talking about the lowest areas of the city you are talking about the business districts."
The report, commissioned by the New York State Energy Research and Development Authority, said the effects of sea level rise and changing weather patterns would be felt as early as the next decade."
Then they shifted that out in 2019 to happen "by 2100":
"Turns out sea levels may be on track to rise by more than double the unsettling figure climate scientists previously projected, and all within a century. A new study, released Monday, predicts a rise of 6.6 feet by 2100, if global temps warm by 9 degrees Fahrenheit. In that "worst-case scenario," according to CNN, hundreds of millions of people—including residents of New York City—would find themselves displaced as their homes sink underwater."
As stubydoo noted, the first article wasn't saying New York would be underwater by 2020; it was saying that IF another Irene-level storm hit, the flooding would be extra bad. That hasn't happened, but a quick Wikipedia look seemed to indicate that flooding from hurricane near misses has indeed been pretty severe.
That's jumping between two different aspects - general sea level and isolated flooding events. The 2011 report was about the flooding situation ***during major storms***. Major storms are random events that you can count on hitting somewhere, but the fates may send them to a different spot instead of yours. As far as I recall, New York City didn't get any major storms around the 2020 timeframe. There was one that hit in 2012, and that one certainly did flood a sizable chunk of the city and many of the tunnels (they're still fixing the tunnels now).
Rightly concerned, of course. The problem is that at the time many targeted minorities felt the concerns were overblown, and left only after repeated prompting. All too many actively chose to stay. It didn't help that many countries were actively hostile to migration in a way that made such alternatives look no better (according to letters that survive). Even the US route was only available to those with connections and money. I'm not sure invoking this tragedy in a Godwin's-law kind of way is useful; it's not a trump card. You perhaps meant to make a more nuanced point but didn't have time to expand on it, and I would be interested to read it.
No, my point ain't all that nuanced. It's that sometimes people saying a terrible event is coming are right. I realize that those saying something like the Holocaust was coming were not strictly speaking prophets of doom, just predictors of terrible things happening to certain classes of people. But of course many people speaking up with great concern about rapid AI developments are also not predicting certain doom. Scott, for instance, places the probability of FoomDoom at 33% if we do not take steps quite soon to stop AI development and make progress on AI alignment. Zvi's prediction is about 50%, I believe. Yudkowsky is the only person I'm aware of saying it's too late to intervene and FoomDoom is now certain.

Does that make those whom he's convinced cult members? Seems like being a cult member requires more than being convinced. You'd have to be preoccupied with the cult's leader and obedient to them, and also willing to cut your ties with those who do not believe as you do. I expect there are some Yudkowsky fans who think that way, but I don't have any reason to think it's most of them. There are probably people whom Yudkowsky's writings have convinced who had no contact with him or others who believe him to be right. They're just off living their lives, trying to come to terms with this awful truth they've been convinced of.

Or take me. I am convinced I do not understand AI well enough to make a valid prediction of how likely FoomDoom is. So I have decided to go by Scott and Zvi's read, because they are smart people in whom I feel a reasonable degree of trust. But I'm sure I'm not a Scott cultist. In fact my last post about how to reduce the risk really irritated him, and far from bowing to the master, I just argued back.
I'm referring to the psychology of doomsday cults in response to someone who thinks "looking silly" is a consideration, to try to point out that this is the wrong framing. As far as I can tell there are some aspects of that psychology that apply to the current discourse about extinction risk.
However I should probably have expanded my comment. I am not trying to dismiss concerns about extinction risk from AI, because I share some of the worry, and extinction is in my personal Overton window. I believe that AI is creating great upheaval and will likely have major destructive effects, even if less directly than an extreme FOOMer position predicts.
Actual doomsday cults tend not to say things like "We all might die, and there's nothing anyone can do to save themselves; there's no grand meaning to the apocalypse, or anything good about it in particular; no reason to be excited, no deep mystery, no spaceship coming to save you, no battle between right and wrong, almost all potential ways out haven't panned out, and the remaining ways of increasing chances of survival are deeply ideologically repulsive to the group's (mostly technophile) members; there will never even be so much as a moment to brag about being correct before death, the future is likely just sad, you should try to live well in what time you have left". The psychology doesn't fit the model you're suggesting here.
> [Mocenigo] uses cutting-edge AI to calculate the expected utility of long-term human survival versus projected suffering over the next few years. As he suspected, its answer is negative. Mocenigo opens his lab fridge and snorts a vial of Anthrax Leprosy Mu, a hybrid pathogen he’s developed. It has an R0 of 14,000 and is uniformly fatal after a month-long symptom-free incubation period. He heads out of the lab to the football stadium nearby that has been converted into a refugee center.
If people start thinking utilitarianism is true, well, someone could end up with their calculus telling them human extinction is the best outcome, as happened there. That makes it one of the most dangerous philosophies ever developed, since unlike other philosophies this one cloaks itself in math, giving it a very powerful allure to people with a certain psychology. The sort of person who could figure out how to actually cause human extinction.
Might be interesting to draw up a catalog of philosophies and ideologies that could justify human extinction, so some heavy duty philosophical work could go into refuting them all.
I think once you take into account the possibilities for causing human extinction non-deliberately, you'll find such a catalogue much less useful. After all, accurately predicting how your actions may contribute to humanity's extinction is probably about as hard as accurately making utilitarian calculations.
Well, philosophy cannot do all things, but refuting dangerous philosophies seems like a worthwhile task for it. I don't think a super high level of certainty is needed to proclaim "this action does not contribute to humanity's extinction". Was beating the Nazis something that accelerated human extinction? "Oh, a capitalist economy accelerates technological development, which increases all sorts of X-risks, therefore, it would have been better that the Allies lost. And if the Allies lost, maybe there would have been no nukes." Some level of risk has to be accepted in every decision.
For what it's worth, I do think the birth of science may have been a colossal mistake. It increased our power without a concomitant increase in the wisdom to handle that power. It arrived too early basically.
I just don't feel there's much grounding for the claim that one philosophical system is more likely to lead to extinction than another. Something akin to that example for utilitarianism can be concocted for basically every one, and I see little hope of quantifying which are most likely.
It wouldn't be based on quantification. Every philosophical view can be used to argue for extinction? You see this in virtue ethics? What is the path there? In communism? In capitalism?
The easy paths involve situations in which things *like* nuclear exchanges are deemed lesser evils. But ultimately everything that utilitarianism does quantitatively can be done qualitatively. Actually writing the full path for a specific philosophy is left as an exercise for the reader - sorry, but I don't find it wise to explicitly lay that out right now, and also I'm tired
It was worthwhile to start the Civil War to end slavery. This might be a similar situation. The global south will survive total nuclear war anyway. But I think this can be handled without WWIII.
Even if we are willing to discount the complexities of the civil war for the sake of the argument, I do not see the analogy with today’s situation. Slavery was not, per se, an advantage in defense, but AI is very likely to be. So war (or the mere threat of war) may increase the pace of AI adoption, leading to an arms race scenario. This is probably the worst possible outcome.
The analogy is that slavery was a moral error, and advancing AI capabilities without solving alignment is also a moral error. For what it's worth, I think we are pretty far from the point where AI is necessary to win a war, and an arms race is going to kick off on its own anyway once AI is seen to provide a decisive advantage. The CCP knows USG wants it gone: AI doesn't change that.
There is a view that wars start because of a grievance of some kind plus the inability to reach a peaceful agreement, typically due to incomplete information. So fighting is in part a learning process. It is hard for me to believe that a major country would start a war over something so silly as a ‘moral error’, whatever that is. Perhaps they would use that as a cover story later on, to justify their choice to their own public opinion. Now if the ‘moral error’ is also a potentially dangerous weapon, I would expect a preemptive attack, with later public discourse spinning it as the right thing to do for the good of mankind or something. But if the AI is already in control on both sides and it has a way of figuring out whether it will win or lose without fighting, it might even choose not to fight. Unless it is aligned, ironically.
That's a lousy argument on its own. There are many moral errors in the world that aren't worth fighting a war over.
Communism is a moral error; is it worth invading China? Probably not, but I'd at least want to see a cost-benefit analysis before we try. Iraqi Baathism was a moral error, and correcting that one turned out to be of dubious worth given the cost.
As for slavery, if only the US had followed the example of every other freaking country in the Americas and peacefully and gradually outlawed slavery (while paying adequate compensation to the slaveowners so they weren't too upset) I think that would have been a vastly better alternative.
There is a comment thread here about why Stormy Daniels has not been/is not being charged with blackmail related to the events that have gotten Donald Trump indicted. I have new info to offer, and for ease of reading will summarize here.
The question basically is, "isn't she guilty of illegal blackmail/extortion?"
Some of us responded to note that hush money payments aren't illegal under federal law, and that the federal blackmail statute (18 U.S.C. § 873) requires that the thing being threatened with exposure be something illegal. Not simply embarrassing or politically inconvenient, but illegal. Consensual banging with a porn star is not illegal.
Trump though is being indicted by the State of New York, and some folks pointed out that blackmail under most state laws does not require that the thing being threatened for exposure be an illegal act. State laws against blackmail cover the instilling of fear more generally, including not just threatening to do someone harm and/or expose a crime, but also threatening to "expose a secret" which would induce fear in the person being blackmailed (that's a quote from the New York statute).
So, then, why isn't Stormy Daniels potentially guilty of illegal extortion? That question turns out to have been posed many times since 2018 in forums like Reddit, Quora, etc., with a variety of attorneys/law professors/former prosecutors/etc. jumping in to respond to it. Their consensus answer is that there isn't any allegation that Stormy Daniels has attempted any extortion as defined in any state laws.
Daniels didn't approach Trump or any Trump representatives to demand money for silence. Rather, she was starting to talk in public about having had sex with Trump and was then approached and (she alleges) threatened with harm if she didn't sign an NDA drafted by Trump's lawyers and accept the payment in exchange. She signed the NDA, received the money, and then Trump failed to sign the document. Later she sued in civil court to invalidate the NDA because of his failure to sign it; she did not in that lawsuit seek any damages or other new payments but simply asked the court to agree that the NDA was not in force.
So, quoting here many lawyers who've posted responses online on this topic during the past four years, "none of the elements of legal blackmail (on Daniels' part) exist."
I’m not sure which will be more entertaining, MSNBC’s or Fox News’s coverage of Trump’s booking. Maybe I’ll record one of them so I can compare and contrast. As usual, I’m kidding. I won’t be watching either. I saw the OJ low-speed chase by accident at an after-work bar gathering. That’s enough reality TV for one lifetime.
Doesn't adopting the longtermist frame imply being pro-life? Been reading a bit of The Precipice, and they sure harp a lot on the value of unborn generations. If these generations are worth a lot, then isn't the future life of the fetus also worth much?
Maybe you can counter, what about the future life of the gametes? But gametes do not turn into people all on their own, like a fetus. Once fertilization occurs, you have something that will almost certainly become a person, should no one intervene. And if future life has value...
This only works if you think that people are morally obligated to maximize total utility and don't put any value on human freedom.
This would also imply forcing people to breed. If we're going to go down that road, better to force everyone to do it instead of placing all the burden on the dumber half of society.
I'm not even sure it works if you assume that! I put no final weight on human freedom - but I think that human freedom is an incredibly valuable thing for maximizing total utility (because usually giving a person freedom gives them the ability to do whatever maximizes their own utility, and they will usually do so better than someone else deciding for them).
While forcing people to breed might be a "greedy" solution towards maximizing the number of happy lives, it does a much better job at maximizing *number* of lives (short term, at least) and a much worse job at making them happy (and over the long term, those lives may not end up very well if they are mostly forced rather than chosen, and may end up with fewer people in the second or third generation if the first generation goes particularly badly).
Not necessarily - a longtermist may believe that future generations have value, but that doesn't mean that value always trumps value/rights/interests in the present.
It's a value/value balance, not a blind yielding of the present to the future. Otherwise you'd end up with crazy longtermists calling for some kind of pregnancy autocracy where we throw human rights out the window and mandate annual pregnancy for all women in order to maximize the size of future generations at all costs.
They are definitely arguing we should be heavily valuing the future. That's not the same as having the future overthrow the present, but then, neither is having a child and giving it up for adoption an overthrow of the present by the future. It would just be including the future in your decision making, instead of thinking only of yourself.
And (just to further complicate the future dimension), most women who get abortions already have at least one child.
"Six in 10 women who have abortions are already mothers, and half of them have two or more children, according to 2019 data from the Centers for Disease Control and Prevention"
So it's not just the future of the pregnant woman being considered, but their limited resources for providing and the impact another mouth would have on the futures of the kid(s) that they already have.
The longtermist frame implies doing things that will increase the general population growth rate, provided that they don't correspondingly decrease the economic growth rate. I don't believe that legal availability of abortion does that much to change the population growth rate (though if someone has good data showing that it does, that would be interesting to know). My understanding is that the availability of abortion makes it safer, and ends up with nearly as many babies born, just that more of them are at times in the parents' life when they are happy to support them.
What I read in The Precipice is that we should be valuing a lot what future generations will accomplish. That's the impulse behind it, not specific implementation details like a balance of the population growth rate and the economic growth rate. Similarly with abortion, if you start valuing what the unborn fetus will accomplish in the future, suddenly the whole thing starts looking very different.
And the only choices are not abortion and raising the kid yourself: newborns given up for adoption are snapped up fast. So it's not really an out to throw out a fetus' entire future life just because you don't want to be inconvenienced for 9 months.
Abortion only makes sense if you think the future life of the fetus has no or insignificant value. And the longtermist frame is all about valuing the future.
I think you're missing the very specific *quantitative* core of longtermism. Nearly *all* ethical views say that we should care about the future. The distinctive feature of longtermism is that it says there's so many *more* people in the future that we should care about the future *more* than the present - enough so, that decisions about the present might generally be better if they are made in light of their effects on the future rather than their effects on the present.
Given this perspective, it's a mistake to evaluate an abortion just by looking at the effects on the one fetus that is denied a chance at life - you instead need to evaluate the abortion by looking at the effects on the huge number of people in the future. It's all about population growth rate and economic growth rate; focusing on the one fetus would be missing the forest for a single tree.
If you value what the unborn fetus might accomplish in the future, but *also* value what the pregnant person might accomplish in the future if they weren't pregnant, and *also* value what might be accomplished by a future fetus that this person might choose to carry to term if not this one, then it all becomes pretty complicated.
But it becomes a lot more simple if you also look at the impact on the future of laws criminalizing abortion, apart from their effect on the abortion itself.
In lieu of a long-winded answer, you might find it interesting to ask yourself the contrapositive (does not being pro-life (or being pro-choice) entail that future life has no value?).
A pro-choicer is definitely saying it has very little value, certainly much less than the longtermists are saying. Perhaps this is not the same as literally no value, but it is a distinction without a difference, since no value or little value, the outcome is the same.
I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die. In that one, we're not even talking of future life any more, but of real, present life, and even that is being devalued. But I'm not sure how many pro-choicers actually endorse that argument.
Does this mean abortion should be banned? Maybe not federally, but I'm ok with letting states do it. I do feel abortion is ignoble in nearly all cases. There would be no abortion if we were all we were meant to be. But this has no bearing on whether longtermism implies pro-life, just decided to clarify my general position on abortion.
Maybe one can be a pro-choice longtermist if one says individual future lives have little value, it's just that there are potentially so many of those that it adds up to a hefty sum, but that's a utilitarian argument, and utilitarianism may itself be an X-risk (https://betterwithout.ai/rollerskating-transsexual-wombats), so I don't think I buy it.
I think the point of the "violinist arguments" you are alluding to is precisely the opposite: they aim to demonstrate that even if the fetus' life had extremely high value (the same value as an adult person's life), there should still be a right to abortion. Whether that argument succeeds or not, I think there are quite a few pro-choicers who believe that fetal life has significant value and even that many abortions are immoral, but that there should still be a right to abortion (rather uncontroversially people enjoy many rights that allow them to do immoral things).
I am not convinced that this is not completely orthogonal to longtermism. Longtermism makes comparisons and value judgments over aggregates such as generations of (potential) morally valuable beings. To do this, one neither has to commit to a point in the biography of these beings at which they gain moral status, nor do comparisons between aggregates necessarily imply anything about comparisons between individuals, as in the abortion case.
You have to commit to these beings having value even before they have a biography, that's the longtermist frame. And each individual being has to matter for this to make sense: multiplication by 0 yields 0.
> rather uncontroversially people enjoy many rights that allow them to do immoral things
Yeah, but we don't live in an anything-goes society, because that's insanity. In abortion, what's in the balance most of the time is 9 months of inconvenience versus an entire lifespan. It's obvious where the scales have to tilt. So I'm coming round to thinking it should be banned, with exceptions if the pregnancy is life-threatening.
Longtermism is not a moral philosophy in its own right but a moral philosophical thesis that roughly states "future generations matter morally and they matter a lot because they could consist of so many morally relevant beings". This is only a thesis about aggregates and does not have to make a commitment about the underlying moral philosophical framework as long as it gives some consideration at all to quantity.
Questions of the morality (or legality) of individual acts such as abortion depend on your moral philosophy not on whether you assign moral weight to future generations. So if e.g. you consider the right to abortion a moral positive then it will follow that in your version of longtermism future societies should have a right to abortion. But you might just as well have the contrary position.
>>I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die.
Why do you think the argument is "sinister?" It seems to me to be self-evidently just. I mean, the only alternative would be a government empowered to force you to maintain the connection against your will, which strikes me as a cure many times worse than the disease. Wouldn't such a government, just by way of example, also be able to forcibly remove one of your kidneys, or some of your plasma or bone marrow, in order to make you donate them to a person in need of a transplant?
The "violinist argument" is a transparently pathetic twisting of the object level situation, in actual real life (almost) nobody kidnaps you and puts a baby inside you by force, rape constitutes less than 0.2% or so of reasons for abortion in the US according to a 1998 meta study[1], it's not even listed under its own category, but bundled into an "other" category.
What actually happens is that the abortion-seeking woman has sex, completely willingly, then treats abortion as a convenient contraceptive to clean up her mess.
By consenting to sex, you consent to its consequences. Each and every single one of them. Nobody kidnapped you to put the violinist inside you, you went out of your way to invite them in. The violinist didn't even exist before you invited them, your invitation literally created them out of thin air and made them dependent on you for just 9 months, which you then deem too much of a cost to your potential wage slavery - oh sorry, "career" - opportunities and decide to kill them for an extra promotion.
"IS IT REALLY FAIR TO BEAR THE CONSEQUENCES OF MY OWN ACTIONS ?!!!", apparently yes, I would say so, lots would agree with me too. The violinist argument is an unconvincing tortured metaphor to try to trick a reader into saying no. Who on Earth ever changed their mind after hearing it ?
>>By consenting to sex, you consent to its consequences.
I think my note elsewhere in this chain is relevant, so I'm just gonna ctrl+c rather than rewriting the wheel.
>I'd argue that consent to sex and consent to pregnancy are overlapping but distinct. To illustrate:
>A pro-life person can argue that consent to pregnancy has a particular quality - namely that it cannot be withdrawn after being granted and before completion of the act. I think it's putting it lightly to say that an argument that consent to sex shared that same quality would yield a different response from the one observed in the context of pregnancy. So "consent to sex" and "consent to pregnancy" aren't the same thing - if they were, we wouldn't see such dramatically different responses to the question "can the consent be withdrawn?"
And if consent to sex =/= consent to pregnancy, then this whole line of argument dissolves.
This is sophistry; the "Consent To Sex" I'm talking about in my comment isn't the literal words "I consent to sex", which you say can be withdrawn (which I find dubious, but whatever, it's not relevant). Rather, the "Consent To Sex" I'm talking about is the action of having sex itself (and its completion).
The best evidence of final and non-withdrawable consent to an action is the completion of that action. You can't willingly drink water - with full knowledge of the consequences - then say "Oh, I withdraw my consent to drinking that water, I didn't want to"; well, too bad, you already drank the water, and the consequences have already happened. You can regret drinking water, you can refrain from drinking more water in the future, and you can certainly take other actions to nullify the consequences of drinking water, but you can never "withdraw consent" to the act of drinking water that you just did; your consent is simply irrelevant after you have already done the action.
Similarly, by engaging in and completing an impregnating sexual action with full knowledge of its consequences, you declare consent - final and non-withdrawable - to said consequences, namely pregnancy. You can then try to abort the baby, but that's not "withdrawing consent", that's just trying to clean up your mess by killing someone, which is trashy and immoral.
I don't know if the government should get involved, it just seems twisted to let someone who is utterly dependent on you (temporarily so!) die. Removing a kidney is permanent damage, so not like being pregnant or the thought experiment, as to plasma and bone marrow, it honestly makes sense that everyone should be signed up for those, like with jury duty.
>>I don't know if the government should get involved, it just seems twisted to let someone who is utterly dependent on you (temporarily so!) die.
Is there an alternative means besides the state for preventing a person who wanted to disconnect from doing so?
>>Removing a kidney is permanent damage, so not like being pregnant or the thought experiment
One in 3 US births is by C-section - shouldn't cutting a child out of its mother's stomach count as leaving "permanent damage?"
"Risks to mothers include...
Increased risks during future pregnancies. Having a C-section increases the risk of complications in a later pregnancy and in other surgeries. The more C-sections, the higher the risks of placenta previa and a condition in which the placenta becomes attached to the wall of the uterus (placenta accreta).
A C-section also increases the risk of the uterus tearing along the scar line (uterine rupture) for women who attempt a vaginal delivery in a later pregnancy."
And while pregnancy is safe, it's not like it's risk-free. Is it still "twisted" to let someone die who is utterly dependent on you if there is a 32/100,000 chance that you'll die of it? Does your government, or do your peers, have the right to force you to take that gamble, even if it's admittedly small?
On the one hand, it's twisted to let someone who is utterly dependent on you (temporarily so!) die. But on the other hand, it's *also* twisted to punish someone for doing that.
There are very good reasons why, even though the law might be aimed at producing a just society, it doesn't state that every immoral act should be punished. Sometimes punishing an immoral act would be an even more immoral act.
This is the best take. The extreme elements of the pro-choice movement that seek to cast abortion as having no moral dimension at all have always sat poorly with me. Of course there are moral stakes.
The problem is that there's no way of banning abortion that isn't *more* immoral.
If we ever develop functional artificial wombs we can just transplant an unwanted fetus into, then I think the question gets much more complex, but with the reproduction setup we have now, any "fix" society might apply is just always going to be way worse than the "problem."
If you willingly picked up the person and attached them to your body knowing full well they would need to remain attached for nine months, then I see no justification for cutting them off. The only exception seems to be rape.
I think that if you generalize the reasoning you are doing here, you seem to be committed to the idea that if you start attempting to save someone who had no other way of being saved, then it should be illegal for you to stop.
That's a good way to encourage people to never start attempting to save anyone.
Literally no one is harmed by someone who starts saving an otherwise-unsavable life and then stops - we assumed that this person was going to have no other shot at life, and so they end up no worse than they would have.
> you seem to be committed to the idea that if you start attempting to save someone who had no other way of being saved, then it should be illegal for you to stop
It's **immoral**, all right; law is another dimension.
>Literally no one is harmed by someone who starts saving an otherwise-unsavable life
An unsavable life that already existed before you tried to save it; it existed through no fault of your own, unlike the fetus of an abortion-seeking pseudo-mother, which exists solely because that mother decided to have sex and didn't think through the consequences. If the sole reason I'm drowning is that you decided to drag me with you to the beach, it's a bit rich to wash your hands of it and say that you're not responsible for saving me.
>That's a good way to encourage people to never start attempting to save anyone.
If the "Saving Someone" in the analogy corresponds to making a baby, this is wildly unrealistic. People will (for better and for worse, more worse than better) always continue to make babies no matter how crazy or unjust laws you make around it, there is nothing more crazy than China's one child policy, and it still haven't prevented the Chinese from making babies.
And that's ignoring that it's an inaccurate, confused analogy; making a baby is literally the opposite of "Saving Someone".
I was going off of the original text of the hypothetical-
"I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die."
There's nothing in that description that suggests you willingly picked up the person and attached them. The premise just seemed to be that the duty flowed from the need.
If we want to alter the hypo to add consent into the mix, then I think it gets a lot more complicated. Is consent to sex the same thing as consent to pregnancy? There's certainly overlap, but it's strange to think that if I'm using condoms, spermicidal lubricants, the birth control pill, the morning after pill, etc, or some combination thereof during consensual sex that I have "consented to pregnancy." I guess one could argue that I've "consented to the possibility of accidental pregnancy," but I've hardly taken the fetus and "willingly picked up the person and attached them to your body knowing full well they would need to remain attached for nine months."
I'd argue that consent to sex and consent to pregnancy are overlapping but distinct. To illustrate:
A pro-life person can argue that consent to pregnancy has a particular quality - namely that it cannot be withdrawn after being granted and before completion of the act. I think it's putting it lightly to say that an argument that consent to sex shared that same quality would yield a different response from the one observed in the context of pregnancy. So "consent to sex" and "consent to pregnancy" aren't the same thing - if they were, we wouldn't see such dramatically different responses to the question "can the consent be withdrawn?"
And if consent to sex is different from consent to pregnancy, then one can't say that a person, by mere act of consensual sex, has "knowingly and willingly attached" the person in need in the hypothetical. So we're just left with a clunky hypo that is a slam-dunk argument for why "abortion is okay in cases of rape" but which beyond that rapidly bogs down and doesn't really advance the conversation much.
The counterargument from Father Emil of Lake Wobegon's Our Lady Of Responsibility RC Church: "Well, if you didn't want to go to Minneapolis, why did you get on the bus?"
What the longtermist frame reveals is that the consent question does not matter much. Think instead of the entire future life of the fetus, regardless of how that fetus came about. Take that into account in your decision making, don't discount it.
While this story made me laugh, I think it also demonstrates how open letters politely imploring "please slow down on AI research, pleeeeeease" are going to do nothing.
Google, the multi-billion, multi-national company, is going to cut back on staplers amongst other things. At that level of penny-pinching, do you really believe they will voluntarily hobble themselves when it comes to a technology they and others are hoping will make them trillions in profit? Especially as that is where they seem to be pinning their hopes?
"In her recent email, Porat said the layoffs were “the hardest decisions we’ve had to make as a company.”
“This work is particularly vital because of our recent growth, the challenging economic environment, and our incredible investment opportunities to drive technology forward — particularly in AI,” Porat’s email said.
...Google employees have also noticed some more extreme cutbacks to office supplies in recent weeks. Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.
“We have been asked to pull all tape/dispensers throughout the building,” a San Francisco facility directive stated. “If you need a stapler or tape, the receptionist desk has them to borrow.”
It would be ironic if these kinds of things were what brought about the AI Apocalypse: "Yeah well, we replaced all our employees with AI because that *massively* cut down on stapler usage. But we never anticipated that when we asked it to eliminate paperclips that it would react in such a way - and that's how World Wars III-VI happened and how we learned the hard way never to get between an AI and its paperclips".
That's treating the receptionist's time as worth very little. For that matter, needing to ask for staples treats everyone else's time as worth very little.
Isn't it mostly accepting that staples don't get used much in the company, and maybe trying to slightly disincentivize the generation of paper documents that need to be stapled? I'm not a Googler but the last time I used a stapler was before Covid-19.
Rationalists are so close to understanding that for-profit companies are already paperclip maximisers, far more dangerous than AI could be, and will literally grind your bones to dust if doing so allowed them to make more profit than the value of your labour in society.
I think this is a pretty good argument against any manufacturing-intensive AI doom scenario. Global capitalism has optimized supply chains and processes down to their most profitable minimum. The idea that the AI can get away with building its own physical agents, much less a planetful of grey goo, before anyone notices the hit to their bottom line is risible.
Air gaps are one thing; the thumb of the invisible hand is another.
GPT: Bedroom doors usually open inward for safety reasons. In case of an emergency, such as a fire, it is easier to push the door open and escape if the door opens inward. Additionally, it also provides more privacy and security for the person inside the room.
Me: By "inward" I mean "towards the bedroom". Your answer makes no sense. In a fire in my bedroom and I'm pushing on the door, I want it to open, not close.
GPT: I apologize for the confusion. Bedroom doors usually open inward, towards the bedroom, for privacy reasons. By opening inward, it allows for the door to be closed and locked from the inside, providing more privacy and security for the occupant of the room. Additionally, it also helps to reduce noise and disturbance from the outside.
Me: My house door opens outward and I can close and lock it from inside the house.
GPT: It is not uncommon for exterior doors to open outward for security reasons. By opening outward, it makes it more difficult for someone to force their way into the house. Additionally, it also helps to prevent the door from being blocked by snow or debris during inclement weather. As long as the door can be locked securely from the inside, it should not pose a safety or security risk.
Me: All of these reasons also apply to bedrooms.
GPT: [Asks for money.]
I certainly am not going to pay for more of these unthinking rote replies...
These rat-shit replies sure do suggest that GabbyPostTotaller is completely lacking in the kind of understanding that Melvin displays -- grasp of pros and cons of inward- vs. outward-opening doors, and the factors that affect which is more conventional and which more desirable in various situations. Fuckin GPT, man.
I can't blame GPT here since (1) the front door of my house opens *inwards* but (2) the front door of my workplace opens *outwards*.
So there isn't a universal rule about "all X open in Y direction". No wonder it's confused. Some doors open in, some doors open out. You wanna know why? Ask the freakin' builder!
If we're really looking for an answer, I guess the convention is that doors should open from the smaller space into the larger space? Bedrooms are usually larger than the corridor they open off, so the door opens into the bedroom so that the corridor doesn't get blocked by an open door. But the outdoors is always larger than the indoors, so exterior doors open outwards. On the odd occasion that a bedroom opens off a larger space, we usually still see bedroom doors open inwards because that's what we're accustomed to. In other cases not specified, the door tends to open into whatever place it will be least inconvenient.
>the door tends to open into whatever place it will be least inconvenient<
Good point. You are including the possibility that the door is left open which I did not consider. Clearly a bedroom door may be left fully or partially open.
The GPT is limited to those ideas that have been written about by humans. You are not. You can take your knowledge of human nature and apply it to the situation being discussed.
I did a similar thing. I took my knowledge that people smoke cigarettes and sometimes do this in bed. Ergo they can fall asleep with a lit cigarette and catch the bed on fire. The fire can smoulder for a while before the sleeper awakes to a bedroom filled with smoke.
What should they do then? Get down on the floor where there may be breathable air and crawl towards the door!
What happens when they reach the door? If the door opens inward, they must crawl backwards into the burning bedroom a sufficient distance to open the door.
No, not snow, but furniture. Furniture in the bedroom close to the door is a definite possibility. Furniture in the space outside the door is much less likely.
Pretty sure April 5 is OK, and Scott posted that he will not be a stickler about time zones, so sounds like 11:59 pm in your time zone is OK. You're writing one too, huh? I'm now confident I will finish mine in time, but my ass sure is starting to drag. How are you doing?
Far, far too many people are conflating the issue of the possibility/likelihood of machine superintelligence soon/at all with the issue of whether or not a machine superintelligence would lead to human extinction.
I was reading Machiavelli’s “Discourses on Livy” (c. 1531) yesterday and found a line that seems like it might have come from Scott:
“It is not without reason that the voice of a people is compared with that of a God, for it is obvious that popular opinion is wondrously effective in its predictions, to the extent that it seems to be able to foresee its own good and evil fortune through some occult power.”
Two other talented artists from the era were Ray Quigley of Popular Science fame (remembered in particular for illustrating the Gus Wilson and the Model Garage stories), and Al Wiseman (who illustrated the Dennis the Menace travel-themed comic books).
I always prefered Wiseman's meticulous detail to Ketcham's work (which became increasingly scribbly with time). As a child I found Wiseman's depictions of '50s American suburbia to be incredibly evocative.
Thank you so much! I have re-collected many of the comic books those stories appeared in. (Dad made me sell the originals to a used book store when I was 13, and of course I've since paid many times what I got for them in buying replacements.)
Check out Al Wiseman's drawing of the flight deck of a Boeing 377 Stratocruiser:
I have a few questions about AI that I haven’t seen discussed. I’d appreciate it if some folks have links that I can read about and/or have some answers.
1) I am confused about how an AI can have agency to act in its own interest. The paper clip scenario sounds more like a dumb machine run amok. I don’t consider that agency in the way we talk about humans making decisions based on what they think is best. I can sort of, kind of see how that might be possible. But talk of a super genius AI deciding to eliminate humanity for its own survival seems like a big leap from a chatbot.
2) Chatbots and image generators have made big advances in a short amount of time. Is the technology that underpins those transferable to other domains like self driving cars? My naive view is that there is a very large difference between making a finished thing like a response or image and understanding the entire space of possibilities in the real world. Bounded vs effectively unbounded domains. I will be more worried about AI once it can cope with real life instead of a highly controlled, limited domain with a single output.
ChatGPT doesn't have agency. The risk is either a different type of AI, or someone adding something on top of ChatGPT -- something for interaction with the real world (there are already plugins that allow ChatGPT to run computer code, or send e-mails), and then a loop that will keep telling the ChatGPT "make a plan how to make my paperclip startup more profitable, send e-mails or run code using these plugins if necessary". This is simplified a lot, but keep going in this direction and you might get some form of agency.
The idea is not that an AI would decide on its own to eliminate humanity, but rather that it was programmed to solve some task, and eliminating humanity is just a side effect of the most efficient way to solve the task. Basically, think about the AI as a very smart psychopath, and if it wants to do X, it considers all possible ways to do X, unless explicitly told otherwise (and there is a way to eliminate humanity that you forgot to forbid explicitly).
The fact that ChatGPT doesn't have hands seems much less relevant to me than the fact that it can talk about all kinds of things. It is easier to add hands to a chatbot, than to make an existing robot with hands think about various different topics.
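To make the "loop on top of a chatbot" idea concrete, here is a minimal sketch. Everything in it is a hypothetical stand-in: `call_llm` is a placeholder for whatever completion API you use, and the single "plugin" just prints instead of actually sending mail.

```python
# Toy sketch of getting agent-like behavior by wrapping a chat model in a loop
# that can invoke tools. call_llm() is a hypothetical stand-in, not a real API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion call; returns a canned 'action'."""
    return "TOOL: send_email | TO: supplier@example.com | BODY: Please quote bulk wire prices."

def send_email(to: str, body: str) -> str:
    # A real plugin would act on the world here; we only log.
    print(f"(pretend) emailing {to}: {body}")
    return "email sent"

TOOLS = {"send_email": send_email}

def agent_loop(goal: str, max_steps: int = 5) -> None:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(history) + "\nWhat is the next action?")
        history.append(reply)
        if reply.startswith("TOOL: send_email"):
            # Crude parsing of the model's requested action.
            _, to_part, body_part = reply.split("|")
            result = TOOLS["send_email"](to_part.split(":", 1)[1].strip(),
                                         body_part.split(":", 1)[1].strip())
            history.append(f"RESULT: {result}")
        else:
            break  # the model gave a final answer instead of requesting a tool

agent_loop("make my paperclip startup more profitable")
```

The model itself never "wants" anything; the loop plus the tools are what turn "text in, text out" into actions in the world, which is the sense in which something agent-like can be bolted on from outside.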
I believe that computer programs need some capacity for self-protection, though it can be partly outsourced to other programs or to the computer it's running on.
Programs exist in a hostile environment-- there are programmers and users who don't mean to do any harm, and there are also trolls and malware.
I don't know that self-protection shades over into agency, but it could.
1) If an entity has any type of goal and is sufficiently intelligent, it will develop instrumental goals to achieve its primary goals. If it's sufficiently sophisticated at setting, seeking, and adapting said goals, that looks indistinguishable from agency.
2) I don't know. I think not, but I really don't know
It will only develop instrumental goals if it is capable of doing so. It cannot magically invent new ways to solve a problem outside of the bounds of its maze. An agent can overcome obstacles in a 2d grid in very creative ways to reach a goal, but it cannot decide to move diagonally or reprogram the grid or anything bizarre like that.
Indeed, if it isn't superintelligent it won't be superintelligent
Current AI chatbots are already "speaking" English and interacting with the real world in some small ways (especially over HTTP); they aren't exactly a 2d maze solver.
That's my point: no one has proved "superintelligence" is a reasonable possibility for ML models beyond the realm of sci-fi. So it is not clear why OP is so confident in suggesting instrumental goals will be easily created by future AIs, when current ones cannot form associations between two distant goals, even with random search.
1. The possibility of machine superintelligence/its likelihood in the near future
2. The danger posed to humanity by a machine superintelligence
Also, 'acting in its own interests' is misleading. 'Optimally satisfying its utility function' is better. Because it's not that a machine becomes like a person and values its continued existence for its own sake, and that this makes it 'happy'. It has been programmed with goals and will try to achieve those goals. If its goal is to destroy earth, then destroying itself in the process may be aligned with its goal of destroying earth, even though self-destruction is ostensibly not 'in its own interests' (though a smarter/more powerful system would likely spread beyond earth so as to exist beyond the attempted destruction of earth, to ensure it has happened and to develop contingencies).
Not great in my experience (3.5). The system likes to focus on a block of text that sounds or is shaped like an abstract, and paraphrase the hell out of it. Other transformer based language models seem similar, but I only have a small sample size.
Somebody needs to tell Yudkowsky not to wear fedoras on podcasts...
Would also be helpful if he didn't do that weird demonic grinning face when giving answers, but I suspect that will be harder to change than headwear choices
I don't care about his style either, but do mind that he's not a good explainer. Whatever it is that good college lecturers have, he just lacks it. Can't say things in a crisp, clear, memorable way. I'm halfway through that interview and both Yudkowsky and Fridman are driving me crazy. I'm in the middle of a long patch of doldrums, which seems to have lasted about half an hour so far, in which Yudkowsky tries to get Fridman to say whether and to what extent he agrees with Yudkowsky: Does he think AI poses *any* big risks? If so, how likely does Fridman think they are to happen? Fridman absolutely refuses to answer, and spouts a bunch of bullshit about how he's into empathy. If the person he's interviewing believes (some ridiculous thing, I forget what), he sees it as his job to sort of mentally role-play what it is like to believe that, rather than pay any attention at all to his own beliefs about whether there is a lot of truth, a bit of truth or no goddam truth at all to the person's belief. A skillful interviewer would have drawn Yudkowsky out in a way that avoided the conversation turning into one about whether Fridman agrees with Yudkowsky. Since he was not skillful enough to avoid this turn in the conversation, I wish he would just come out and say what's clearly the case, which is that he thinks Yudkowsky's views are either fully wrong or quite likely to be wrong. But he doesn't have the guts to do it. OMG. It's really a terrible interview, and goes on for 3+ hours.
Few writers are good at improv. This should not cause us to think less of their work, though maybe it would be best to let writers focus on writing and find people with other skills to appear on podcasts.
It doesn't make me think less of his work. I'm vexed because I want to understand this stuff better, and I'm not getting much benefit in that area from the podcast. Also vexed because, thinking practically about the problem of persuading people to take AI risk seriously, it's just very unfortunate that THE key figure here is not a good explainer and also lacks charm and charisma. I admire him for putting so much of himself on the line. I'm sure he's aware of how he comes across. I hope being more publicly visible and publicly critiqued isn't painful for him.
I do feel a bit bad for him about the making faces. It's clear that that's happening because he's not what's currently called 'neurotypical'. I know because I have a similar problem, though not as bad, basically finding it hard to fully play the role of a 'normie' in terms of body language or social scripts. It sucks because it makes a lot of people immediately not take you seriously, and I suspect that's worse for him.
The fedora is on him, but then again, at the end of the day it really is just a damn hat.
The thing is, there are many in the public face of AI research who are not "neurotypical," some of whom make fun of "normies." I don't think it makes people not take them seriously; rather, it causes a worry that these super-important decisions -- whether and how fast to further develop AI in the interest of humanity at large -- are made by people who might be brilliant but are not very well aligned socially or feel uncomfortable in their skin. Please, this is not a criticism of aneurotypicals, rather a call for broader participation. Tegmark says as much in his video about superintelligence on Twitter.
FFS Eliezer, I can look at my cat and tell you there is ‘someone’ in there. I can look at the structure and scheme of GPT4 and tell you there is no one in there.
Unless you believe in souls/immaterial minds, or that there is something almost magical about neurons that allows them and them alone to produce consciousness, consciousness simply has to result from some physical process or processes, such as information processing, electrical oscillation frequency, or some other function performed by neurons.
If you don't believe in souls/the uniqueness of neurons, then other things simply have to be capable of generating consciousness, and except for biological systems which evolved with similar nervous systems to us and the ability to display e.g. emotion with their faces, you would have no way of knowing it.
Now, if you do think that you can't have consciousness without neurons, then you'll have to explain why and how.
This is a fundamentally subjective response, that convinces no one. There are people who look at GPT4 and will tell you someone is in there, and there are people that will happily slaughter cats.
I know. It is at least partially my own subjective take.
I still believe I'm right though. At this point feel free to call it something lame like spider-sense if you want. It is based on my own earlier formal study of AI and meditative contemplation of the nature of consciousness and agency.
If I ever do a long form explanation of my beliefs I'll link to it, but my gut (sorry, I know, subjective) tells me to give that effort lower priority than a lot of other things right now.
Edit: I do touch on one item in my response to Shion below.
I think he's wrong about most of the AI x-risk stuff, but I think there is a complexity here:
I agree with you, that there probably is 'someone' in your cat (for a certain definition of 'someone'), and I don't THINK that's the case for GPT4, but I'm less confident about that than I'd like to be.
I think that the only reason that humans know that humans are conscious is that we directly observe the experience of our own consciousness, and thereby know that other humans are conscious too, because they are structured almost identically to ourselves. I don't think we would be able to know nearly as easily if 'we' were aliens looking at humanity from the outside.
I was studying AI as an undergrad in the mid 80’s with a special interest in computational linguistics. At that time the language abilities of GPT were unimaginable. The problem as it was seen then is that language is a very complicated thing and if any serious progress was to be made in things like machine translation between natural languages, we would have to develop a system that had its own understanding of a model of the physical world.
Amazing progress has been made but current AI systems have *no* understanding of a model of the physical world. They use brute force statistical methods to do a clever job of token prediction and that is about it. They *know* nothing. They *want* nothing more than they have been programmed to ‘want’.
This is not intelligence, artificial or otherwise. It’s a clever *simulation* of such but that is all.
Forget about cats for a moment and look at the 0.3 mm-long trichogramma wasp. It has no more than 10,000 neurons, but it exhibits complex social behavior.
Show a way to code something up with 10k basic units that is driven to use clever ways to find proper sustenance and a mate in order to reproduce and I’ll rethink this whole idea.
We cannot have people like this be the face of our cause.
Sure, some will say that we shouldn't judge a book by its cover, etc, etc, but the reality is to succeed we will have to appeal to normies, and normies love their vague heuristics and cognitive distortions. We can no longer just hole up on obscure websites and applaud each other for achieving More Rational Ways of Thinking. If capital-R Rationality cannot adapt to this movement, it needs to fade away and let a new approach take over.
Hypothesis: Yud actually agrees with the whole Roko's Basilisk thing, and is playing his part to ensure that superintelligent AIs will be brought into this world by destroying the possibility of a sensible and respectable AI alignment movement.
Some tips on starting a blog: it's hard going if you are not starting with a following from elsewhere. And self-promotion can get your posts taken down in certain subs. But that said, you never know where this may lead. It really changes how you think.
Writing, no, but a lot more people are capable of being sympathetic to the AI-risk case and seeing some skinny fat neckbeard dude wearing a fedora inside has got to have a negative effect, even subconsciously. And I say that with no disrespect to Yudkowsky.
By his own admission he doesn't understand people very well, so he should probably hold off on making these sorts of assumptions.
What I know about humans is that if you're trying to influence human behaviour, it's best to do it from the position of being (or at least appearing to be) a high-status human. Failing that, try looking like a medium-status human. What you definitely don't want to look like is a low-status human, and the "indoor fedora" look makes you look like a low status human, the kind of human that most humans will instinctively recoil from lest they wind up in a conversation about Warhammer figurines.
Does anyone know where I can find weather forecasting datasets that include the outcome of the thing attempting to be forecast? For example, a dataset with probabilistic predictions of whether it will rain the next day in a certain area, and also the outcome of whether it ended up raining or not?
I'm interested in doing some analysis to see how accurate weather forecasts are, but it's been surprisingly difficult to find this type of data.
For model forecasts and the observations from stations in Europe, they are usually available if you start digging through various weather service institutions' websites, though observations and forecasts might be behind different files / APIs. The main limitation is that the people putting them up often assume you more or less know what you want and what to do with it.
One standard complication for weather forecasts - a forecast of 50% chance of rain sometimes means they are *certain* it will rain in *half* the region their forecast covers. You'd want to be able to properly account for that sort of thing here.
Where I am even a 90% chance of rain just means there's a 90% chance there will be at least a little rain at some point in the 24 hours. Not uncommon at all to see 90% chance of rain, and expected total 0.5". Really different from a forecast that it will rain 90% of the time tomorrow, and using percent to indicate the latter really seems like a more useful convention.
In theory you should be able to access the historical outputs of the major forecasting models (like this https://weather.us/model-charts). But I am only assuming the organizations store their past forecasts. I have no idea if you are able to access them.
And if you want something like local news forecasts - no idea. Old newspapers will have some, though probably not in machine-readable format.
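Once you do manage to pair forecasts with observations, the scoring part of the analysis is the easy bit. Here's a minimal sketch, assuming you've assembled a CSV with one row per forecast and made-up column names "p_rain" (forecast probability, 0-1) and "rained" (1/0 outcome); it prints the Brier score plus a crude calibration table. The caveat raised above still applies: you need to know what the published percentage actually means before scoring it.

```python
# Minimal sketch: Brier score and calibration from a hypothetical CSV
# with columns "p_rain" (forecast probability) and "rained" (0/1 outcome).
import csv
from collections import defaultdict

def score_forecasts(path: str) -> None:
    with open(path, newline="") as f:
        rows = [(float(r["p_rain"]), int(r["rained"])) for r in csv.DictReader(f)]

    # Brier score: mean squared error of the forecast probabilities (lower is better).
    brier = sum((p - y) ** 2 for p, y in rows) / len(rows)
    print(f"Brier score: {brier:.3f} over {len(rows)} forecasts")

    # Calibration: within each forecast bucket, how often did it actually rain?
    buckets = defaultdict(list)
    for p, y in rows:
        buckets[round(p, 1)].append(y)
    for p in sorted(buckets):
        ys = buckets[p]
        print(f"forecast {p:.0%}: observed {sum(ys) / len(ys):.0%} (n={len(ys)})")

# score_forecasts("forecasts_vs_outcomes.csv")
```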
Question mostly for famous people: how do you deal with requests for sentences on book covers? (ie "This is a great book about its topic, you should read it - Scott Alexander, blogger")
I've had some people ask me, I usually feel like it's a pretty strong imposition to ask me to read the whole book about a topic I might not care much about in the couple of weeks they give me until they need an answer, but I also feel uncomfortable giving a blurb about something I haven't read. Does anyone have a system they're happy with?
My favourite example of this is when Nassim Taleb provided a quote for Rolf Dobelli and then discovered that Dobelli had just ripped off all his ideas https://fooledbyrandomness.com/dobelli.htm
I get asked for this a lot. If it’s the publisher asking me, I ask for my free review copy of the book. If it actually looks interesting to me, I read and provide a blurb; if not I ghost or just tell the publisher that it’s too far from my interests and they should ask this other person or whatever. If it’s the author asking me, similar policy, except I feel worse about saying no.
Some people are very uptight with it. Some people are very loose. I haven't noticed much of a difference in reputations either way. Knowing your personality (insofar as I do) I'd set up a system with explicit lead time.
Something like, "If you want me to blurb or otherwise talk about your book please submit a final draft of the book with at least two months of lead time. I will try to read it and if I do finish it and think it's good I'll give a blurb. But there's an equally good chance I won't. This has nothing to do with your book's quality: sometimes it's just not my area of interest or ability."
Not famous, but I wrote a book that was blurbed by some well-known intellectuals. I wrote two of the blurbs, and they were signed by the intellectuals. In those cases I had relationships with the two, and they're the ones who suggested the arrangement. That's obviously not the situation you're in. I have no idea whether or not you find this surprising. It is a common arrangement.
A semi-famous novelist once told me he got a blurb by a much more famous novelist on the back of one of his novels. The way it happened: the much more famous novelist sent him a letter saying: "Here's a blurb (something very positive) from me that you can put on your new novel. If you don't like it, write whatever you want and put my name on it. Just be sure to put my name on it." The semi-famous novelist's point was that when you see a name by another author on a book jacket, it is usually more about promoting the author of the blurb than it is the book. This explains why, for instance, Norman Mailer's name appears on the back of so many novels from the mid-50s through the mid-70s.
You clearly aren't in the self-promotion game, but perhaps you could tell the requester: "I won't have time to read the book soon, but what do you want me to write? Give me three examples and I'll tell you if I'm OK with your putting my name on one of them." This puts the ball back in their court, and then if you aren't comfortable with any of their suggestions without having read the book, say no. That wouldn't be rude on your part because you gave them a fair chance.
I am fortunate enough to have a similar problem. My heuristic is that if I’m not interested in the topic, I politely express my gratitude for the request and my regret for being unable to help. If I am interested, I read it the way I read anything that interested me when time is short - skim, pick one or two chapters that are extra interesting, and read these in-depth. Then I write a nice quote.
So far you can find my quote on exactly two books, and I stand behind my words for both (working on my third quote).
I am offering a mini-grant(s) to any individual [ideally who I could pay through a 501 c(3)] that would use the money for any one of the following:
1. Filling otherwise unmet need for high-IQ people ages 5 to 20 for whom the grant to you is likely to help him/her/them live up to their potential to make a difference in their current sphere of influence or the larger society. "Make a difference" can be in a progressive, conservative, or apolitical direction.
2. Encouraging discussion of pro-merit issues, e.g., the net negative of yet more redistribution of money and attention from people with greater potential to contribute to a better society to those less likely to. Like the other two foci, this must be used for an initiative that would otherwise go unfunded.
3. Taking a worthy step toward understanding the biological basis of reasoning or impulse control that would otherwise go unfunded.
Email me a brief proposal saying: 1. What you would do with the money. 2. What makes you a person likely to use the money well. 3. What would be the amount of the grant that would yield maximum benefit per dollar I'd give you. 4. Whether I could send the grant money through a 501(c)(3). Send your proposal to me at mnemko@comcast.net
I can't help you but just want to say this is an incredible initiative and I hope it succeeds.
Thank you. I look forward to seeing proposals.
Any thoughts on Italy blocking ChatGPT? It's been the first country in the world to do so, citing privacy concerns over user data: https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847.
Other EU countries may follow suit.
Southern Europe is used to missing out on technological progress, so I'm sure they'll be fine and impoverished.
What worries me is that it's getting praised in Western Europe, which would be a disaster for our future.
So alignment will be trivially easy?
And your evidence for this?
I wrote a thing about the thing that everyone is writing about lately. But this one is different, I swear..
https://kyleimes.substack.com/p/the-robots-are-coming-and-its-gonna
It's that time of year again: is Easter pagan?
So here's a video interview on the topic:
https://www.youtube.com/watch?v=xixdmHaajoc
Summary of the one-hour video:
"Nope, Easter is Christian, because that's when Jesus was crucified and rose from the dead. (Technical details on how the exact day was calculated.) If other religions celebrate the same day, it's because 'equinox' and 'end of winter' are universally interesting things. I did a lot of research on the goddess Iostra, and the spelling is slightly different. Anyway, she is probably made up. Also, maybe it's the other way round, and the pagans worldwide emulated the holiday of the local Christian communities. Easter eggs are a medieval invention, probably started as: 'the Lent is over, now you can eat eggs again'. Easter bunny was invented in 17 century as a rationalization for the eggs. People say a lot of crazy stuff on internet; if you are really curious about the history of religion, get a PhD in Theology."
Does anyone know of any tools that can transcribe Teams calls?
And before you say it: I can't use the Teams transcribe function because my meetings can't be recorded due to GDPR, so I need another way.
I'm not sure I follow. If you can't use transcribe because of GDPR (and not because it barely works), then why would another tool not be restricted in the same way?
The two main problems with the Teams version are that when you want to fully transcribe the meeting, you have to record names and record video too.
Whereas I'm hoping for another app that transcribes the meeting from audio alone without recording people's names, while still recognising distinct voices. This way, no 'personal' data (name and face) is gathered, but I can still record all the details from the call.
I didn't think the built-in function actually required "recording", as it can also work as real-time voice to text.
Hire stenographers? otter.ai
You have to record the sessions to get a full transcript. Otherwise you just get captions that will disappear
There are tools to record selected parts of the screen. Perhaps you can just record the video in a narrow box around the captions, with no audio or rest of the screen?
I thought there was a download transcription setting. 🤷♂️
Otter.ai?
Airgram?
The open source whisper.cpp does a decent job at creating a raw transcript but it needs editing to attribute text to specific speakers.
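If you'd rather stay in Python than build whisper.cpp, the openai-whisper package runs the same family of models and gives you timestamped segments. A minimal sketch follows; the model size and file name are placeholders, and speaker attribution still has to be done by hand or with a separate diarization tool:

```python
# Minimal sketch using the openai-whisper Python package (same underlying
# models as whisper.cpp, different wrapper). "base" and "meeting.wav" are
# placeholders; only the audio is processed, no names or video.
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting.wav")

# Timestamped segments; attributing them to specific speakers is still a
# manual step (or a job for a separate diarization tool).
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s - {seg['end']:7.1f}s] {seg['text'].strip()}")
```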
Won't the people crying out about AGI killing off humanity look dumb in ten years if it hasn't happened yet, and also doesn't look like it's about to happen?
Think about your dad's reaction if you play Russian Roulette _once_. Won't he look stupid that you survived (in 5/6 cases)? Doesn't make it even one iota less stupid.
It's a different question from what OP asked. For a water pistol a reaction would be dumb even before firing, because there is no question of the outcome. Having to wait 10 years to see what happens implies a degree of uncertainty.
To take your metaphor back to the main topic, you're suggesting not only that we'll be ok, but that there is absolutely no possibility of harm from going bullish on strong AI, no matter what. This is, well, a hell of an argument to make, and it should take proportionally more evidence.
"Will I look like an idiot in 10 years If I spoke my mind" is a great, teenager-inspired, way to never do or say anything of value, and still look like an idiot, because not doing or saying anything is equivalent to doing and saying whatever the mainstream around you does and says, which is also not immune to being wrong and ridiculous.
Given enough time, every idea is dumb.
There is no workaround to being wrong, you have to be wrong in order to be right. I also think that AGI doomers are wrong, but I don't approach it from the ridiculous angle of "Right Side Of History"ism, I just say out loud why I think they are wrong.
They certainly will to witty-take oriented people who have a great dread of being out of fashion and appearing overly serious.
What the heck is the point of this comment? Trying to shame people into thinking that AGI won't be catastrophic for humanity? If you genuinely think that AI will kill humanity, the risk of 'looking dumb' is a negligible cost.
"Won't the people saying AI isn't a risk to humanity feel like morons as they're dying at the hands of an AI?"
And what are you even saying? That they'll look dumb if and only if AGI exists in ten years time? Or if AGI doesn't exist by then they'll also look dumb?
If you have an argument to make against the AI risk case, make it, but 'you're gonna look so dumb when you're wrong' is just childish.
Even worse, even if they're right they still won't get to look smug because they'll be dead
Unless it was their crying that prevented it? (https://en.wikipedia.org/wiki/Self-defeating_prophecy)
Or the people clutching their pearls about wokeness destroying the country.
Kinda rich coming from the ideology that dresses itself in 59 Kg of pearls and spends long days and nights clutching them about **Checks Notes** the unavailability of Drag shows for kids.
All "pearl clutching" really means is "taking something more seriously than I think it should be taken." I guess it would be a witty and amusing way of making that point, if it weren't such an stale internet cliche at this point.
Ironic, considering wokeness IS pearl clutching, and it's the woke constantly freaking out about things.
"Yeah I don't think assaulting people and burning down people's small businesses across the country in the name of black nationalist outrage is a good thing"
"HAHA YOU PEARL CLUTCHER!"
I remember when climate change was going to have New York City underwater by 2020. Or was it 2010? Either way, when doomsday predictions don't happen on schedule the people who make them don't reconsider, they just push the date out and deny ever having made the old prediction.
Who was predicting NYC would be underwater by 2020?
These people? An article from 2011:
https://www.theguardian.com/environment/2011/nov/16/climate-change-report-new-york-city
"Irene-like storms of the future would put a third of New York City streets under water and flood many of the tunnels leading into Manhattan in under an hour because of climate change, a new state government report has warned.
Sea level rise due to climate change would leave lower Manhattan dangerously exposed to flood surges during major storms, the report, which looks at the impact of climate change across the entire state of New York, warns.
"The risks and the impacts are huge," said Art deGaetano, a climate scientist at Cornell University and lead author of the ClimAID study. "Clearly areas of the city that are currently inhabited will be uninhabitable with the rising of the sea."
Factor in storm surges, and the scenario becomes even more frightening, he said. "Subway tunnels get affected, airports - both LaGuardia and Kennedy sit right at sea level - and when you are talking about the lowest areas of the city you are talking about the business districts."
The report, commissioned by the New York State Energy Research and Development Authority, said the effects of sea level rise and changing weather patterns would be felt as early as the next decade."
Then they shifted that out in 2019 to happen "by 2100":
https://gothamist.com/news/new-climate-report-suggests-nyc-could-be-under-water-sooner-than-predicted
"Turns out sea levels may be on track to rise by more than double the unsettling figure climate scientists previously projected, and all within a century. A new study, released Monday, predicts a rise of 6.6 feet by 2100, if global temps warm by 9 degrees Fahrenheit. In that "worst-case scenario," according to CNN, hundreds of millions of people—including residents of New York City—would find themselves displaced as their homes sink underwater."
As stubydoo noted, the first article wasn't saying New York would be underwater by 2020, it was saying IF another Irene-level storm hit the flooding would be extra bad. That hasn't happened, and a quick Wikipedia look seemed to indicate that flooding from hurricane near misses has indeed been pretty severe.
That article from 2011 predicts 2.5 feet of rise by 2050. This article from the same outlet (https://www.theguardian.com/environment/commentisfree/2021/apr/13/sea-level-rise-climate-emergency-harold-wanless) from 2021 predicts 3 feet of rise by 2050. Seems pretty consistent to me.
That's jumping between two different aspects - general sea level and isolated flooding events. The 2011 report was about the flooding situation ***during major storms***. Major storms are random events that you can count on hitting somewhere, but the fates may send them to a different spot instead of yours. As far as I recall, New York City didn't get any major storms around the circa ~2020 timeframe. There was one that hit in 2012, and that one certainly did flood a sizable chunk of the city and many of the tunnels (they're still fixing the tunnels now)
I agree, some predictions being wrong certainly proves other predictions are wrong.
That's not how the psychology of doomsday cults works. Whether doom arrives or not is not all that crucial.
So what term do you use for people who saw, for instance, the course Germany was on a decade before it got to the Holocaust?
Rightly concerned, of course. The problem is that at the time many targeted minorities felt the concerns were overblown, and left only through repeated prompts. All too many actively chose to stay. It didn't help that many countries were actively hostile to migration in a way that made such alternatives look no better (according to letters that survive). Even the US route was only available to those with connections and money. I'm not sure invoking this tragedy in a Godwin's law kind of way is useful; it's not a trump card? You perhaps meant to make a more nuanced point but didn't have time to expand it at the time, but I would be interested to read it.
No, my point ain't all that nuanced. It's that sometimes people saying a terrible event is coming are right. I realize that those saying something like the Holocaust was coming were not strictly speaking prophets of doom, just predictors of terrible things happening to certain classes of people. But of course many people speaking up with great concern about rapid AI developments are also not predicting certain doom. Scott, for instance, places the probability of FoomDoom at 33% if we do not take steps quite soon to stop AI development and make progress on AI alignment. Zvi's prediction is about 50%, I believe. Yudkowsky is the only person I'm aware of saying it's too late to intervene and FoomDoom is now certain. Does that make those whom he's convinced cult members? Seems like being a cult member requires more than being convinced. You'd have to be preoccupied with the cult's leader and obedient to them, and also willing to cut your ties with those who do not believe as you do. I expect there are some Yudkowsky fans who think that way, but I don't have any reason to think it's most of them. There are probably people whom Yudkowsky's writings have convinced who had no contact with him or others who believe him to be right. They're just off living their lives, trying to come to terms with this awful truth they've been convinced of. Or take me. I am convinced I do not understand AI well enough to make a valid prediction of how likely FoomDoom is. So I have decided to go by Scott and Zvi's read, because they are smart people in whom I feel a reasonable degree of trust. But I'm sure I'm not a Scott cultist. In fact my last post about how to reduce the risk really irritated him, and far from bowing to the master, I just argued back.
How about you stop calling it a 'doomsday cult'?
I'm referring to the psychology of doomsday cults in response to someone who thinks "looking silly" is a consideration, to try to point out that this is the wrong framing. As far as I can tell there are some aspects of that psychology that apply to the current discourse about extinction risk.
However I should probably have expanded my comment. I am not trying to dismiss concerns about extinction risk from AI, because I share some of the worry, and extinction is in my personal Overton window. I believe that AI is creating great upheaval and will likely have major destructive effects, even if less directly than an extreme FOOMer position predicts.
Actual doomsday cults tend not to say things like "We all might die, and there's nothing anyone can do to save themselves; there's no grand meaning to the apocalypse, or anything good about it in particular; no reason to be excited, no deep mystery, no spaceship coming to save you, no battle between right and wrong, almost all potential ways out haven't panned out, and the remaining ways of increasing chances of survival are deeply ideologically repulsive to the group's (mostly technophile) members; there will never even be so much as a moment to brag about being correct before death, the future is likely just sad, you should try to live well in what time you have left". The psychology doesn't fit the model you're suggesting here.
That sounds like a caricature of 20th century existentialism, was that your intent?
Has anyone made a practical design for an arcology meant for placement in very hot or cold climates?
Does The Line in Saudi Arabia count?
https://www.neom.com/en-us/regions/theline
I don't know if it's actually been designed for a hot climate.
Is utilitarianism itself an X-risk? There's a pretty outlandish scenario described here: https://betterwithout.ai/rollerskating-transsexual-wombats that culminates in something not so outlandish:
> [Mocenigo] uses cutting-edge AI to calculate the expected utility of long-term human survival versus projected suffering over the next few years. As he suspected, its answer is negative. Mocenigo opens his lab fridge and snorts a vial of Anthrax Leprosy Mu, a hybrid pathogen he’s developed. It has an R0 of 14,000 and is uniformly fatal after a month-long symptom-free incubation period. He heads out of the lab to the football stadium nearby that has been converted into a refugee center.
If people start thinking utilitarianism is true, well, someone could end up with their calculus telling them human extinction is the best outcome, as happened there. Making it one of the most dangerous philosophies developed, since unlike other philosophies this one cloaks itself in math, making it have a very powerful allure to people with a certain psychology. The sort of person who could figure out how to actually cause human extinction.
Might be interesting to draw up a catalog of philosophies and ideologies that could justify human extinction, so some heavy duty philosophical work could go into refuting them all.
I think once you take into account the possibilities for causing human extinction non-deliberately you'll find such a catalogue much less useful. After all, accurately predicting how your actions may contribute to humanity's extinction is probably about as hard as accurately making utilitarian calculations
Well, philosophy cannot do all things, but refuting dangerous philosophies seems like a worthwhile task for it. I don't think a super high level of certainty is needed to proclaim "this action does not contribute to humanity's extinction". Was beating the Nazis something that accelerated human extinction? "Oh, a capitalist economy accelerates technological development, which increases all sorts of X-risks, therefore, it would have been better that the Allies lost. And if the Allies lost, maybe there would have been no nukes." Some level of risk has to be accepted in every decision.
For what it's worth, I do think the birth of science may have been a colossal mistake. It increased our power without a concomitant increase in the wisdom to handle that power. It arrived too early basically.
I just don't feel there's much grounding for the claim that one philosophical system is more likely to lead to extinction than another. Something akin to that example for utilitarianism can be concocted for basically every one, and I see little hope of quantifying which are most likely
It wouldn't be based on quantification. Every philosophical view can be used to argue for extinction? You see this in virtue ethics? What is the path there? In communism? In capitalism?
The easy paths involve situations in which things *like* nuclear exchanges are deemed lesser evils. But ultimately everything that utilitarianism does quantitatively can be done qualitatively. Actually writing the full path for a specific philosophy is left as an exercise for the reader - sorry, but I don't find it wise to explicitly lay that out right now, and also I'm tired
Or someone could think it's worthwhile starting WWIII so long as it takes out the AIs.
And that's automatically wrong, obviously.
It was worthwhile to start the Civil War to end slavery. This might be a similar situation. The global south will survive total nuclear war anyway. But I think this can be handled without WWIII.
"It was worthwhile to start the Civil War to end slavery."
Says who?
And 'end slavery' should really be 'end slavery sooner'.
Even if we are willing to discount the complexities of the civil war for the sake of the argument, I do not see the analogy with today’s situation. Slavery was not, per se, an advantage in defense, but AI is very likely to be. So war (or the mere threat of war) may increase the pace of AI adoption, leading to an arms race scenario. This is probably the worst possible outcome.
The analogy is that slavery was a moral error, and advancing AI capabilities without solving alignment is also a moral error. For what it's worth, I think we are pretty far from the point AI is necessary to win a war, and an arms race was going to kick off on its own anyway once AI is seen to provide a decisive advantage. The CCP knows USG wants it gone: AI doesn't change that.
"The analogy is that slavery was a moral error, and advancing AI capabilities without solving alignment is also a moral error. "
I presume that enslaving Irish immigrants to go kill a bunch of people in the South was an act of pristine moral clarity though, right?
There is a view that wars start because of a grievance of some kind plus the inability to reach a peaceful agreement, typically due to incomplete information. So fighting is in part a learning process. It is hard for me to believe that a major country would start a war over something so silly as a ‘moral error’, whatever that is. Perhaps they would use that as a cover story later on, to justify their choice to their own public opinion. Now if the ‘moral error’ is also a potentially dangerous weapon, I would expect a preemptive attack, with later public discourse spinning it as the right thing to do for the good of mankind or something. But if the AI is already in control on both sides and it has a way of figuring out whether it will win or lose without fighting, it might even choose not to fight. Unless it is aligned, ironically.
That's a lousy argument on its own. There are many moral errors in the world that aren't worth fighting a war over.
Communism is a moral error, is it worth invading China? Probably not, but I'd at least want to see a cost-benefit analysis before we try. Iraqi Baathism was a moral error and the cost of that one turns out to have been dubiously worthwhile.
As for slavery, if only the US had followed the example of every other freaking country in the Americas and peacefully and gradually outlawed slavery (while paying adequate compensation to the slaveowners so they weren't too upset) I think that would have been a vastly better alternative.
And it probably would have ended up being cheaper too, all things considered.
There is a comment thread here about why Stormy Daniels has not/is not being charged with blackmail related to the events that have gotten Donald Trump indicted. I have new info to offer and for ease of reading will summarize here.
The question basically is, "isn't she guilty of illegal blackmail/extortion?"
Some of us responded to note that hush money payments aren't illegal under federal law. U.S. law (see 18 U.S.C. § 873) requires that the thing being threatened for exposure be something that was illegal. Not simply embarrassing or politically inconvenient, but illegal. Consensual banging with a porn star is not illegal.
Trump though is being indicted by the State of New York, and some folks pointed out that blackmail under most state laws does not require that the thing being threatened for exposure be an illegal act. State laws against blackmail cover the instilling of fear more generally, including not just threatening to do someone harm and/or exposing a crime but also threatening to "expose a secret" which would induce fear in the person being blackmailed (that's a quote from the New York statute).
So, then, why isn't Stormy Daniels potentially guilty of illegal extortion? That question turns out to have been posed many times since 2018 in forums like Reddit and Quora, with a variety of attorneys/law professors/former prosecutors/etc. jumping in to respond to it. Their consensus answer is that there isn't any allegation that Stormy Daniels has attempted any extortion as defined in any state laws.
Daniels didn't approach Trump or any Trump representatives to demand money for silence. Rather, she was starting to talk in public about having had sex with Trump and was then approached and (she alleges) threatened with harm if she didn't sign an NDA drafted by Trump's lawyers and accept the payment in exchange. She signed the NDA, received the money, and then Trump failed to sign the document. Later she sued in civil court to invalidate the NDA because of his failure to sign it; she did not in that lawsuit seek any damages or other new payments but simply asked the court to agree that the NDA was not in force.
So, quoting here many lawyers who've posted responses online on this topic during the past four years, "none of the elements of legal blackmail (on Daniels' part) exist."
I’m not sure which will be more entertaining, MSNBC’s or Fox News coverage of Trump’s booking. Maybe I’ll record one of them so I can compare and contrast. As usual, I’m kidding. I won’t be watching either. I saw the OJ low speed chase by accident in an after work bar gathering. That’s enough reality TV for one lifetime.
Doesn't adopting the longtermist frame imply being pro-life? Been reading a bit of The Precipice, and they sure harp a lot on the value of unborn generations. If these generations are worth a lot, then isn't the future life of the fetus also worth much?
Maybe you can counter, what about the future life of the gametes? But gametes do not turn into people all on their own, like a fetus. Once fertilization occurs, you have something that will almost certainly become a person, should no one intervene. And if future life has value...
This only works if you think that people are morally obligated to maximize total utility and don't put any value on human freedom.
This would also imply forcing people to breed. If we're going to go down that road, better to force everyone to do it instead of placing all the burden on the dumber half of society.
I'm not even sure it works if you assume that! I put no final weight on human freedom - but I think that human freedom is an incredibly valuable thing for maximizing total utility (because usually giving a person freedom gives them the ability to do whatever maximizes their own utility, and they will usually do so better than someone else deciding for them).
While forcing people to breed might be a "greedy" solution towards maximizing the number of happy lives, it does a much better job at maximizing *number* of lives (short term, at least) and a much worse job at making them happy (and over the long term, those lives may not end up very well if they are mostly forced rather than chosen, and may end up with fewer people in the second or third generation if the first generation goes particularly badly).
> you have something that will almost certainly become a person, should no one intervene.
I don't know... seems like a lot of intervening is done to ensure it becomes a person.
Not necessarily - a longtermist may believe that future generations have value, but that doesn't mean that value always trumps value/rights/interests in the present.
It's a value/value balance, not a blind yielding of the present to the future. Otherwise you'd end up with crazy longtermists calling for some kind of pregnancy autocracy where we throw human rights out the window and mandate annual pregnancy for all women in order to maximize the size of future generations at all costs.
They are definitely arguing we should be heavily valuing the future. That's not the same as having the future overthrow the present, but then, neither is having a child and giving it up for adoption an overthrow of the present by the future. It would just be including the future in your decision making, instead of thinking only of yourself.
People who decide to have an abortion are usually thinking *primarily* about the future in their decision.
And (just to further complicate the future dimension), most women who get abortions already have at least one child.
"Six in 10 women who have abortions are already mothers, and half of them have two or more children, according to 2019 data from the Centers for Disease Control and Prevention"
https://www.nytimes.com/interactive/2021/12/14/upshot/who-gets-abortions-in-america.html
So it's not just the future of the pregnant woman being considered, but their limited resources for providing and the impact another mouth would have on the futures of the kid(s) that they already have.
The longtermist frame implies doing things that will increase the general population growth rate, provided that they don't correspondingly decrease the economic growth rate. I don't believe that legal availability of abortion does that much to change the population growth rate (though if someone has good data showing that it does, that would be interesting to know). My understanding is that the availability of abortion makes it safer, and ends up with nearly as many babies born, just that more of them are at times in the parents' life when they are happy to support them.
What I read in The Precipice is that we should be valuing a lot what future generations will accomplish. That's the impulse behind it, not specific implementation details like a balance of the population growth rate and the economic growth rate. Similarly with abortion, if you start valuing what the unborn fetus will accomplish in the future, suddenly the whole thing starts looking very different.
And the only choices are not abortion and raising the kid yourself: newborns given up for adoption are snapped up fast. So it's not really an out to throw out a fetus' entire future life just because you don't want to be inconvenienced for 9 months.
Abortion only makes sense if you think the future life of the fetus has no or insignificant value. And the longtermist frame is all about valuing the future.
I think you're missing the very specific *quantitative* core of longtermism. Nearly *all* ethical views say that we should care about the future. The distinctive feature of longtermism is that it says there's so many *more* people in the future that we should care about the future *more* than the present - enough so, that decisions about the present might generally be better if they are made in light of their effects on the future rather than their effects on the present.
Given this perspective, it's a mistake to evaluate an abortion just by looking at the effects on the one fetus that is denied a chance at life - you instead need to evaluate the abortion by looking at the effects on the huge number of people in the future. It's all about population growth rate and economic growth rate, and not about missing the forest for looking at one tree.
If you value what the unborn fetus might accomplish in the future, but *also* value what the pregnant person might accomplish in the future if they weren't pregnant, and *also* value what might be accomplished by a future fetus that this person might choose to carry to term if not this one, then it all becomes pretty complicated.
But it becomes a lot more simple if you also look at the impact on the future of laws criminalizing abortion, apart from their effect on the abortion itself.
In lieu of a long-winded answer, you might find it interesting to ask yourself the contrapositive (does not being pro-life (or being pro-choice) entail that future life has no value?).
A pro-choicer is definitely saying it has very little value, certainly much less than the longtermists are saying. Perhaps this is not the same as literally no value, but it is a distinction without a difference, since no value or little value, the outcome is the same.
I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die. In that one, we're not even talking of future life any more, but of real, present life, and even that is being devalued. But I'm not sure how many pro-choicers actually endorse that argument.
Does this mean abortion should be banned? Maybe not federally, but I'm ok with letting states do it. I do feel abortion is ignoble in nearly all cases. There would be no abortion if we were all we were meant to be. But this has no bearing on whether longtermism implies pro-life, just decided to clarify my general position on abortion.
Maybe one can be a pro-choice longtermist if one says individual future lives have little value, it's just that there are potentially so many of those that it adds up to a hefty sum, but that's an utilitarian argument, and utilitarianism may itself be an X-risk (https://betterwithout.ai/rollerskating-transsexual-wombats), so I don't think I buy it.
I think the point of the "violinist arguments" you are alluding to is precisely the opposite: they aim to demonstrate that even if the fetus' life had extremely high value (the same value as an adult person's life), there should still be a right to abortion. Whether that argument succeeds or not, I think there are quite a few pro-choicers who believe that fetal life has significant value and even that many abortions are immoral, but that there should still be a right to abortion (rather uncontroversially people enjoy many rights that allow them to do immoral things).
I am not convinced that this is not completely orthogonal to longtermism. Longtermism makes comparisons and value-judgments over aggregates such as generations of (potential) morally valuable beings. To do this neither does one have to commit to a point in the biography of these beings at which they gain moral status nor do comparisons between aggregates necessarily imply anything about comparisons between individuals as in the abortion case.
You have to commit to these beings having value even before they have a biography, that's the longtermist frame. And each individual being has to matter for this to make sense: multiplication by 0 yields 0.
> rather uncontroversially people enjoy many rights that allow them to do immoral things
Yeah, but we don't live in an anything goes society because that's insanity. In abortion, what's in the balance most of the time is 9 months of inconvenience versus an entire lifespan. It's obvious where the scales have to tilt. So I'm coming round to it should be banned, with exceptions if the pregnancy is life threatening.
Longtermism is not a moral philosophy in its own right but a moral philosophical thesis that roughly states "future generations matter morally and they matter a lot because they could consist of so many morally relevant beings". This is only a thesis about aggregates and does not have to make a commitment about the underlying moral philosophical framework as long as it gives some consideration at all to quantity.
Questions of the morality (or legality) of individual acts such as abortion depend on your moral philosophy not on whether you assign moral weight to future generations. So if e.g. you consider the right to abortion a moral positive then it will follow that in your version of longtermism future societies should have a right to abortion. But you might just as well have the contrary position.
>>I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die.
Why do you think the argument is "sinister?" It seems to me to be self-evidently just. I mean, the only alternative would be a government empowered to force you to maintain the connection against your will, which strikes me as a cure many times worse than the disease. Wouldn't such a government, just by way of example, also be able to forcibly remove one of your kidneys, or some of your plasma or bone marrow, in order to make you donate them to a person in need of a transplant?
The "violinist argument" is a transparently pathetic twisting of the object level situation, in actual real life (almost) nobody kidnaps you and puts a baby inside you by force, rape constitutes less than 0.2% or so of reasons for abortion in the US according to a 1998 meta study[1], it's not even listed under its own category, but bundled into an "other" category.
What actually happens is that the abortion-seeking woman has sex, completely willingly, then treats abortion as a convenient contraceptive to clean up her mess.
By consenting to sex, you consent to its consequences. Each and every single one of them. Nobody kidnapped you to put the violinist inside you, you went out of your way to invite them in. The violinist didn't even exist before you invited them, your invitation literally created them out of thin air and made them dependent on you for just 9 months, which you then deem too much of a cost to your potential wage slavery - oh sorry, "career" - opportunities and decide to kill them for an extra promotion.
"IS IT REALLY FAIR TO BEAR THE CONSEQUENCES OF MY OWN ACTIONS ?!!!", apparently yes, I would say so, lots would agree with me too. The violinist argument is an unconvincing tortured metaphor to try to trick a reader into saying no. Who on Earth ever changed their mind after hearing it ?
[1] https://upload.wikimedia.org/wikipedia/commons/d/dd/AGIAbortionReasonsBarChart.png
>>By consenting to sex, you consent to its consequences.
I think my note elsewhere in this chaining is relevant so just gonna ctrl+c rather than rewriting the wheel.
>I'd argue that consent to sex and consent to pregnancy are overlapping but distinct. To illustrate:
>A pro-life person can argue that consent to pregnancy has a particular quality - namely that it cannot be withdrawn after being granted and before completion of the act. I think it's putting it lightly to say that an argument that consent to sex shared that same quality would yield a different response from the one observed in the context of pregnancy. So "consent to sex" and "consent to pregnancy" aren't the same thing - if they were, we wouldn't see such dramatically different responses to the question "can the consent be withdrawn?"
And if consent to sex =/= consent to pregnancy, then this whole line of argument dissolves.
This is sophistry; the "Consent To Sex" I'm talking about in my comment isn't the literal words "I consent to sex", which you say can be withdrawn (which I find dubious, but whatever, it's not relevant). Rather, the "Consent To Sex" I'm talking about is the action of having sex itself (and its completion).
The best evidence of final and non-withdrawable consent to an action is the completion of that action. You can't willingly drink water - with full knowledge of the consequences - then say "Oh, I withdraw my consent to drinking that water, I didn't want to"; well, too bad, you already drank the water, and the consequences have already happened. You can regret drinking water, you can refrain from drinking more water in the future, and you can certainly take other actions to nullify the consequences of drinking water, but you can never "withdraw consent" to the act of drinking water that you just did; your consent is simply irrelevant after you have already done the action.
Similarly, by engaging in and completing an impregnating sexual act with full knowledge of its consequences, you declare consent - final and non-withdrawable - to said consequences, namely pregnancy. You can then try to abort the baby, but that's not "withdrawing consent"; that's just trying to clean up your mess by killing someone, which is trashy and immoral.
I don't know if the government should get involved; it just seems twisted to let someone who is utterly dependent on you (temporarily so!) die. Removing a kidney is permanent damage, so it's not like being pregnant or the thought experiment. As for plasma and bone marrow, it honestly makes sense that everyone should be signed up for those, like with jury duty.
Can you do it to a plant? Cuz I'm about to throw out our scaggy waiting room plant, which nobody but me waters.
>>I don't know if the government should get involved, it just seems twisted to let someone who is utterly dependent on you (temporarily so!) die.
Is there an alternative means besides the state for preventing a person who wanted to disconnect from doing so?
>>Removing a kidney is permanent damage, so not like being pregnant or the thought experiment
One in three US births is by C-section - shouldn't cutting a child out of its mother's stomach count as leaving "permanent damage"?
"Risks to mothers include...
Increased risks during future pregnancies. Having a C-section increases the risk of complications in a later pregnancy and in other surgeries. The more C-sections, the higher the risks of placenta previa and a condition in which the placenta becomes attached to the wall of the uterus (placenta accreta).
A C-section also increases the risk of the uterus tearing along the scar line (uterine rupture) for women who attempt a vaginal delivery in a later pregnancy."
https://www.mayoclinic.org/tests-procedures/c-section/about/pac-20393655
And while pregnancy is safe, it's not like it's risk-free. Is it still "twisted" to let someone die who is utterly dependent on you if there is a 32/100,000 chance that you'll die of it? Does your government, or do your peers, have the right to force you to take that gamble, even if it's admittedly small?
https://www.cdc.gov/nchs/data/hestat/maternal-mortality/2021/maternal-mortality-rates-2021.htm
On the one hand, it's twisted to let someone who is utterly dependent on you (temporarily so!) die. But on the other hand, it's *also* twisted to punish someone for doing that.
There are very good reasons why, even though the law might be aimed at producing a just society, it doesn't state that every immoral act should be punished. Sometimes punishing an immoral act would be an even more immoral act.
This is the best take. The extreme elements of the pro-choice movement that seek to cast abortion as having no moral dimension at all have always sat poorly with me. Of course there are moral stakes.
The problem is that there's no way of banning abortion that isn't *more* immoral.
If we ever develop functional artificial wombs we can just transplant an unwanted fetus into, then I think the question gets much more complex, but with the reproduction setup we have now, any "fix" society might apply is just always going to be way worse than the "problem."
If you willingly picked up the person and attached them to your body knowing full well they would need to remain attached for nine months, then I see no justification for cutting them off. The only exception seems to be rape.
I think that if you generalize the reasoning you are doing here, you seem to be committed to the idea that if you start attempting to save someone who had no other way of being saved, then it should be illegal for you to stop.
That's a good way to encourage people to never start attempting to save anyone.
Literally no one is harmed by someone who starts saving an otherwise-unsavable life and then stops - we assumed that this person was going to have no other shot at life, and so they end up no worse off than they would have been.
> you seem to be committed to the idea that if you start attempting to save someone who had no other way of being saved, then it should be illegal for you to stop
It's **Immoral** alright, law is another dimension.
>Literally no one is harmed by someone who starts saving an otherwise-unsavable life
An unsavable life that already existed before you tried to save it; it existed through no fault of your own, unlike the fetus of an abortion-seeking pseudo-mother, which exists solely because that mother decided to have sex without thinking through the consequences. If the sole reason I'm drowning is that you decided to drag me with you to the beach, it's a bit rich to wash your hands of it and say that you're not responsible for saving me.
>That's a good way to encourage people to never start attempting to save anyone.
If the "Saving Someone" in the analogy corresponds to making a baby, this is wildly unrealistic. People will (for better and for worse, more worse than better) always continue to make babies no matter how crazy or unjust laws you make around it, there is nothing more crazy than China's one child policy, and it still haven't prevented the Chinese from making babies.
And that's ignoring that it's an inaccurate, confused analogy; making a baby is literally the opposite of "Saving Someone".
I was going off of the original text of the hypothetical-
"I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die."
There's nothing in that description that suggests you willingly picked up the person and attached them. The premise just seemed to be that the duty flowed from the need.
If we want to alter the hypo to add consent into the mix, then I think it gets a lot more complicated. Is consent to sex the same thing as consent to pregnancy? There's certainly overlap, but it's strange to think that if I'm using condoms, spermicidal lubricants, the birth control pill, the morning after pill, etc, or some combination thereof during consensual sex that I have "consented to pregnancy." I guess one could argue that I've "consented to the possibility of accidental pregnancy," but I've hardly taken the fetus and "willingly picked up the person and attached them to your body knowing full well they would need to remain attached for nine months."
I'd argue that consent to sex and consent to pregnancy are overlapping but distinct. To illustrate:
A pro-life person can argue that consent to pregnancy has a particular quality - namely that it cannot be withdrawn after being granted and before completion of the act. I think it's putting it lightly to say that an argument that consent to sex shared that same quality would yield a different response from the one observed in the context of pregnancy. So "consent to sex" and "consent to pregnancy" aren't the same thing - if they were, we wouldn't see such dramatically different responses to the question "can the consent be withdrawn?"
And if consent to sex is different from consent to pregnancy, then one can't say that a person, by mere act of consensual sex, has "knowingly and willingly attached" the person in need in the hypothetical. So we're just left with a clunky hypo that is a slam-dunk argument for why "abortion is okay in cases of rape" but which beyond that rapidly bogs down and doesn't really advance the conversation much.
The counterargument from Father Emil of Lake Wobegon's Our Lady Of Responsibility RC Church: "Well, if you didn't want to go to Minneapolis, why did you get on the bus?"
What the longtermist frame reveals is that the consent question does not matter much. Think instead of the entire future life of the fetus, regardless of how that fetus came about. Take that into account in your decision making, don't discount it.
While this story made me laugh, I think it also demonstrates how open letters politely imploring "please slow down on AI research, pleeeeeease" are going to do nothing.
Google, the multi-billion, multi-national company, is going to cut back on staplers amongst other things. At that level of penny-pinching, do you really believe they will voluntarily hobble themselves when it comes to a technology they and others are hoping will make them trillions in profit? Especially as that is where they seem to be pinning their hopes?
https://www.cnbc.com/2023/04/03/google-to-cut-down-on-employee-laptops-services-and-staplers-to-save.html
"In her recent email, Porat said the layoffs were “the hardest decisions we’ve had to make as a company.”
“This work is particularly vital because of our recent growth, the challenging economic environment, and our incredible investment opportunities to drive technology forward — particularly in AI,” Porat’s email said.
...Google employees have also noticed some more extreme cutbacks to office supplies in recent weeks. Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.
“We have been asked to pull all tape/dispensers throughout the building,” a San Francisco facility directive stated. “If you need a stapler or tape, the receptionist desk has them to borrow.”
It would be ironic if these kinds of things were what brought about the AI Apocalypse: "Yeah well, we replaced all our employees with AI because that *massively* cut down on stapler usage. But we never anticipated that when we asked it to eliminate paperclips that it would react in such a way - and that's how World Wars III-VI happened and how we learned the hard way never to get between an AI and its paperclips".
Staplers? What they really should be cutting back on are paperclips....
"The time has come" the walrus said
"To talk of many things
Of sleaze and scary half-true facts
And Babbages and Bings."
Oh yeah and also
"But wait a bit," the Altmans cried
"Before we have out chat.
For some of us are scared of death
And all of us love Chat."
"And juries?" asked the Carpenter
They hated him for that.
>As a part of the January U.S. layoffs, the company let go of more than two dozen on-site massage therapists.
How many on-site massage therapists did they have?!
...I wonder if you can still get a massage from the reception desk.
When I worked there that was about the right order of magnitude. It was a very popular perk. Google was a very tense place to work.
Having worked reception in my time, I'd say if you ask, you'll get one with the stapler 😁
The staple-remover would be good too. At our house we call it Fang.
That's treating the receptionist's time as worth very little. For that matter, needing to ask for staples treats everyone else's time as worth very little.
Isn't it mostly accepting that staples don't get used much in the company, and maybe trying to slightly disincentivize the generation of paper documents that need to be stapled? I'm not a Googler but the last time I used a stapler was before Covid-19.
Removing staplers and tape sounds suspiciously like paper clip maximization to me.
Rationalists are so close to understanding that for-profit companies are already paperclip maximisers, far more dangerous than AI could be, and will literally grind your bones to dust if that allows them to make more profit than the value of your labour in society.
I think this is a pretty good argument against any manufacturing-intensive AI doom scenario. Global capitalism has optimized supply chains and processes down to their most profitable minimum. The idea that the AI can get away with building its own physical agents, much less a planetful of grey goo, before anyone notices the hit to their bottom line is risible.
Air gaps are one thing; the thumb of the invisible hand is another.
Me: Why do bedroom doors open inward?
GPT: Bedroom doors usually open inward for safety reasons. In case of an emergency, such as a fire, it is easier to push the door open and escape if the door opens inward. Additionally, it also provides more privacy and security for the person inside the room.
Me: By "inward" I mean "towards the bedroom". Your answer makes no sense. In a fire in my bedroom and I'm pushing on the door, I want it to open, not close.
GPT: I apologize for the confusion. Bedroom doors usually open inward, towards the bedroom, for privacy reasons. By opening inward, it allows for the door to be closed and locked from the inside, providing more privacy and security for the occupant of the room. Additionally, it also helps to reduce noise and disturbance from the outside.
Me: My house door opens outward and I can close and lock it from inside the house.
GPT: It is not uncommon for exterior doors to open outward for security reasons. By opening outward, it makes it more difficult for someone to force their way into the house. Additionally, it also helps to prevent the door from being blocked by snow or debris during inclement weather. As long as the door can be locked securely from the inside, it should not pose a safety or security risk.
Me: All of these reasons also apply to bedrooms.
GPT: [Asks for money.]
I certainly am not going to pay for more of these unthinking rote replies...
These rat-shit replies sure do suggest that GabbyPostTotaller is completely lacking in the kind of understanding that Melvin displays -- grasp of pros and cons of inward- vs. outward-opening doors, and the factors that affect which is more conventional and which more desirable in various situations. Fuckin GPT, man.
>completely lacking in the kind of understanding that Melvin displays <
I have been quite skeptical that GPT could display any true intelligence, and I still found this conversation to be disillusioning.
I think what GPT offers is conventional pablum. A blenderized version of the conventional wisdom with no concept of the real world behind it.
I can't blame GPT here since (1) the front door of my house opens *inwards* but (2) the front door of my workplace opens *outwards*.
So there isn't a universal rule about "all X open in Y direction". No wonder it's confused. Some doors open in, some doors open out. You wanna know why? Ask the freakin' builder!
Well, the actual answer to my original question is CONVENTION. But that requires the incisive understanding that GPT will never have.
If we're really looking for an answer, I guess the convention is that doors should open from the smaller space into the larger space? Bedrooms are usually larger than the corridor they open off, so the door opens into the bedroom so that the corridor doesn't get blocked by an open door. But the outdoors is always larger than the indoors, so exterior doors open outwards. On the odd occasion that a bedroom opens off a larger space, we usually still see bedroom doors open inwards because that's what we're accustomed to. In other cases not specified, the door tends to open into whatever place it will be least inconvenient.
>the door tends to open into whatever place it will be least inconvenient<
Good point. You are including the possibility that the door is left open which I did not consider. Clearly a bedroom door may be left fully or partially open.
The GPT is limited to those ideas that have been written about by humans. You are not. You can take your knowledge of human nature and apply it to the situation being discussed.
I did a similar thing. I took my knowledge that people smoke cigarettes and sometimes do this in bed. Ergo they can fall asleep with a lit cigarette and catch the bed on fire. The fire can smoulder for a while before the sleeper awakes to a bedroom filled with smoke.
What should they do then? Get down on the floor where there may be breathable air and crawl towards the door!
What happens when they reach the door? If the door opens inward, they must crawl backwards into the burning bedroom a sufficient distance to open the door.
https://www.doorstop.com.au/blog/why-do-doors-open-inwards#:~:text=The%20top%20answer%20is%20security,to%20check%20who%20is%20outside.
Excellent link. Thank you!
>Me: All of these reasons also apply to bedrooms.
I am amused at the concept of someone's bedroom getting snowed in.
"I am amused at the concept of someone's bedroom getting snowed in."
Well, if you're the Cold Genius from Purcell's "King Arthur", your bedroom is snowy:
https://www.youtube.com/watch?v=t7WehY5vTl4
What power art thou, who from below
Hast made me rise unwillingly and slow
From beds of everlasting snow?
See'st thou not how stiff and wondrous old
Far unfit to bear the bitter cold,
I can scarcely move or draw my breath?
Let me, let me freeze again to death.
Great!
No, not snow, but furniture. Furniture in the bedroom close to the door is a definite possibility. Furniture in the space outside the door is much less likely.
The weird thing is that the GPT program thought the possibility of a pile of snow was a good reason for the exterior door to open outwards!
Clearly it has no actual concept of the real world (unlike an autopilot program which needs a good model).
Does the April 5 due date mean that we have to get it in before April 5, or is April 5 OK?
Pretty sure April 5 is OK, and Scott posted that he will not be a stickler about time zones, so sounds like 11:59 pm in your time zone is OK. You're writing one too, huh? I'm now confident I will finish mine in time, but my ass sure is starting to drag. How are you doing?
Far, far too many people are conflating the issue of the possibility/likelihood of machine superintelligence soon/at all with the question of whether or not a machine superintelligence would lead to human extinction.
I was reading Machiavelli’s “Discourses on Livy” (c. 1531) yesterday and found a line that seems like it might have come from Scott:
“It is not without reason that the voice of a people is compared with that of a God, for it is obvious that popular opinion is wondrously effective in its predictions, to the extent that it seems to be able to foresee its own good and evil fortune through some occult power.”
He drew magazine covers featuring flying saucer 'buses,' exotic land-sea cruisers, and parking places for personal helicopters.
Despite chronicling the technofuturism of the 1940s and 50s, this mysterious artist, Arthur C. Bade, is almost forgotten today.
https://fragmentsintime.substack.com/p/arthur-c-bade
(If you know anything more about his life, please reply, or comment on the blog post linked above - thanks!)
Thank you, this is marvelous stuff.
Two other talented artists from the era were Ray Quigley of Popular Science fame (remembered in particular for illustrating the Gus Wilson and the Model Garage stories), and Al Wiseman (who illustrated the Dennis the Menace travel-themed comic books).
https://www.pulpartists.com/Quigley.html
https://gus-stories.org/1948.htm
https://en.wikipedia.org/wiki/Al_Wiseman
https://alwiseman.org/
Thanks so much, NASATTACXR! Appreciate these two artist references and links!
(I never knew about Wiseman's and Toole's work on the Dennis The Menace comics themselves, until you noted this just now.)
Aron, very glad you enjoyed these!
I always preferred Wiseman's meticulous detail to Ketcham's work (which became increasingly scribbly with time). As a child I found Wiseman's depictions of '50s American suburbia incredibly evocative.
WOW! Went looking for some of those, and found this, for starters ... https://todaysinspiration.blogspot.com/2013/07/the-art-of-summer-reading-al-wiseman.html
Thank you so much! I have re-collected many of the comic books those stories appeared in. (Dad made me sell the originals to a used book store when I was 13, and of course I've paid many times that since in buying replacements.)
Check out Al Wiseman's drawing of the flight deck of a Boeing 377 Stratocruiser:
https://bobistheoilguy.com/forums/threads/a-boeing-377-stratocruiser-cross-a-taxiway-bridge-at-j-f-k-airport-1951.366101/page-2#post-6418498
His technical detail was marvelous.
I have a few questions about AI that I haven’t seen discussed. I’d appreciate it if some folks have links that I can read about and/or have some answers.
1) I am confused about how an AI can have agency to act in its own interest. The paper clip scenario sounds more like a dumb machine run amok. I don’t consider that agency in the way we talk about humans making decisions based on what they think is best. I can sort of, kind of see how that might be possible. But talk of a super genius AI deciding to eliminate humanity for its own survival seems like a big leap from a chatbot.
2) Chatbots and image generators have made big advances in a short amount of time. Is the technology that underpins those transferable to other domains like self driving cars? My naive view is that there is a very large difference between making a finished thing like a response or image and understanding the entire space of possibilities in the real world. Bounded vs effectively unbounded domains. I will be more worried about AI once it can cope with real life instead of a highly controlled, limited domain with a single output.
ChatGPT doesn't have agency. The risk is either a different type of AI, or someone adding something on top of ChatGPT -- something for interaction with the real world (there are already plugins that allow ChatGPT to run computer code, or send e-mails), and then a loop that keeps telling ChatGPT "make a plan for how to make my paperclip startup more profitable; send e-mails or run code using these plugins if necessary". This is simplified a lot, but keep going in this direction and you might get some form of agency.
The idea is not that an AI would decide on its own to eliminate humanity, but rather that it was programmed to solve some task, and eliminating humanity is just a side effect of the most efficient way to solve the task. Basically, think about the AI as a very smart psychopath, and if it wants to do X, it considers all possible ways to do X, unless explicitly told otherwise (and there is a way to eliminate humanity that you forgot to forbid explicitly).
The fact that ChatGPT doesn't have hands seems much less relevant to me than the fact that it can talk about all kinds of things. It is easier to add hands to a chatbot than to make an existing robot with hands think about various different topics.
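To make the "loop on top of a chatbot" idea concrete, here is a minimal sketch in Python. Note that call_llm() and send_email() are hypothetical stand-ins for whatever model endpoint and plugins you wire up, not real APIs:

# Toy "agent loop" wrapped around a chatbot. call_llm() and send_email() are
# hypothetical placeholders, not real library calls.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for your chat model of choice")

def send_email(to: str, body: str) -> None:
    raise NotImplementedError("stand-in for an email-sending plugin")

def agent_loop(goal: str, max_steps: int = 10) -> None:
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Actions taken so far: {history}\n"
            "Reply with either 'EMAIL <address> <body>' or 'DONE'."
        )
        action = call_llm(prompt)
        history.append(action)
        if action.startswith("DONE"):
            break
        if action.startswith("EMAIL"):
            _, address, body = action.split(" ", 2)
            send_email(address, body)

# agent_loop("make my paperclip startup more profitable")

The point is just that whatever agency emerges lives in the outer loop plus the tools, not in the chatbot itself.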
I believe that computer programs need some capacity for self-protection, though it can be partly outsourced to other programs or to the computer it's running on.
Programs exist in a hostile environment-- there are programmers and users who don't mean to do any harm, and there are also trolls and malware.
I don't know that self-protection shades over into agency, but it could.
1) If an entity has any type of goal and is sufficiently intelligent, it will develop instrumental goals to achieve its primary goals. If it's sufficiently sophisticated at setting, seeking, and adapting said goals, that looks indistinguishable from agency.
2) I don't know. I think not, but I really don't know
It will only develop instrumental goals if it is capable of doing so. It cannot magically invent new ways to solve a problem outside of the bounds of its maze. An agent can overcome obstacles in a 2d grid in very creative ways to reach a goal, but it cannot decide to move diagonally or reprogram the grid or anything bizarre like that.
Indeed, if it isn't superintelligent it won't be superintelligent
Current ai chatbots are already "speaking" English and interacting with the real world in some small ways (especially over http), they aren't exactly a 2d maze solver
That's my point: no one has proved "superintelligence" is a reasonable possibility for ML models beyond the realm of sci-fi. So it is not clear why OP is so confident in suggesting instrumental goals will be easily created by future AIs when current ones cannot form associations between two distant goals, even with random search.
What's your point? That much is true of any agent, including humans
You're conflating two separate issues
1. The possibility of machine superintelligence/its likelihood in the near future
2. The danger posed to humanity by a machine superintelligence
Also, 'acting in its own interests' is misleading. 'Optimally satisfying its utility function' is better. Because it's not that a machine becomes like a person and values its continued existence for its own sake, and that this makes it 'happy'. It has been programmed with goals and will try to achieve those goals. If its goal is to destroy earth, then destroying itself in the process may be aligned with its goal of destroying earth, even though self-destruction is ostensibly not 'in its own interests' (though a smarter/more powerful system would likely spread beyond earth so as to exist beyond the attempted destruction of earth, to ensure it has happened and to develop contingencies).
> It has been programmed with goals and will try to achieve those goals.
*If* it has been programmed with goals , it has been programmed with goals and will try to achieve those goals.
Does anyone have much experience with using ChatGPT to summarise scientific papers? Good, okay, bad?
Not great in my experience (3.5). The system likes to focus on a block of text that sounds or is shaped like an abstract, and paraphrase the hell out of it. Other transformer based language models seem similar, but I only have a small sample size.
Do you guys know of any literature review of randomized field experiments in social work and/or social welfare programs ? Thanks in advance !
Somebody needs to tell Yudkowsky not to wear fedoras on podcasts...
Would also be helpful if he didn't do that weird demonic grinning face when giving answers, but I suspect that will be harder to change than headwear choices
Where can I see this podcast?
https://www.youtube.com/watch?v=AaTRHFaaPG8&ab_channel=LexFridman
Not sure if this is the one but he is sporting a fedora in it.
For the record I don't care about his style or facial expressions.
He is über smart but I happen to disagree with him on this one thing.
https://www.youtube.com/watch?v=LWebKGrFjcM
I don't care either but most people do
I don't care about his style either, but I do mind that he's not a good explainer. Whatever it is that good college lecturers have, he just lacks it. He can't say things in a crisp, clear, memorable way. I'm halfway through that interview and both Yudkowsky and Fridman are driving me crazy. I'm in the middle of a long patch of doldrums, which seems to have lasted about half an hour so far, in which Yudkowsky tries to get Fridman to say whether and to what extent he agrees with Yudkowsky: does he think AI poses *any* big risks? If so, how likely does Fridman think they are to happen? Fridman absolutely refuses to answer, and spouts a bunch of bullshit about how he's into empathy: if the person he's interviewing believes some ridiculous thing (I forget what), he sees it as his job to mentally role-play what it is like to believe that, rather than pay any attention at all to his own beliefs about whether there is a lot of truth, a bit of truth, or no goddam truth at all to the person's belief. A skillful interviewer would have drawn Yudkowsky out in a way that avoided the conversation turning into one about whether Fridman agrees with Yudkowsky. Since he was not skillful enough to avoid this turn in the conversation, I wish he would just come out and say what's clearly the case, which is that he thinks Yudkowsky's views are either fully wrong or quite likely to be wrong. But he doesn't have the guts to do it. OMG. It's really a terrible interview, and it goes on for 3+ hours.
Few writers are good at improv. This should not cause us to think less of their work, though maybe it would be best to let writers focus on writing and find people with other skills to appear on podcasts.
It likely makes people new to his work think less of it, even if it's subconsciously.
It doesn't make me think less of his work. I'm vexed because I want to understand this stuff better, and I'm not getting much benefit in that area from the podcast. Also vexed because, thinking practically about the problem of persuading people to take AI risk seriously, it's just very unfortunate that THE key figure here is not a good explainer and also lacks charm and charisma. I admire him for putting so much of himself on the line. I'm sure he's aware of how he comes across. I hope being more publicly visible and publicly critiqued isn't painful for him.
I do feel a bit bad for him about the making faces. It's clear that that's happening because he's not what's currently called 'neurotypical'. I know because I have a similar problem, though not as bad: basically finding it hard to fully play the role of a 'normie' in terms of body language or social scripts. It sucks because it makes a lot of people immediately not take you seriously, and I suspect that's worse for him.
The fedora is on him, but then again, at the end of the day it really is just a damn hat.
The thing is, there are many in the public face of AI research who are not "neurotypical", some of whom make fun of "normies." I don't think it makes people not take them seriously; rather, it causes a worry that these super-important decisions (whether and how fast to further develop AI in the interest of humanity at large) are being made by people who might be brilliant but are not very well aligned socially, or who feel uncomfortable in their skin. Please note this is not a criticism of the non-neurotypical, but rather a call for broader participation. Tegmark says as much in his video about Superintelligence on Twitter.
FFS Eliezer, I can look at my cat and tell you there is 'someone' in there. I can look at the structure and scheme of GPT4 and tell you there is no one in there.
I agree. GPT is about as sentient as a giant Hallmark card.
Unless you believe in souls/immaterial minds, or that there is something almost magical about neurons that allows them and them alone to produce consciousness, consciousness simply has to result from some physical process or processes, such as information processing, electrical oscillation frequency, or some other function performed by neurons.
If you don't believe in souls/the uniqueness of neurons, then other things simply have to be capable of generating consciousness, and except for biological systems which evolved with similar nervous systems to us and the ability to display e.g. emotion with their faces, you would have no way of knowing it.
Now, if you do think that you can't have consciousness without neurons, then you'll have to explain why and how.
This is a fundamentally subjective response, that convinces no one. There are people who look at GPT4 and will tell you someone is in there, and there are people that will happily slaughter cats.
I know. It is at least partially my own subjective take.
I still believe I'm right though. At this point feel free to call it something lame like spider-sense if you want. It is based on my own earlier formal study of AI and meditative contemplation of the nature of consciousness and agency.
If I ever do a long form explanation of my beliefs I'll link to it, but my gut (sorry, I know, subjective) tells me to give that effort lower priority than a lot of other things right now.
Edit: I do touch on one item in my response to Shion below.
I think he's wrong about most of the AI x-risk stuff, but to me I think there is a complexity here:
I agree with you, that there probably is 'someone' in your cat (for a certain definition of 'someone'), and I don't THINK that's the case for GPT4, but I'm less confident about that than i'd like to be.
I think that the only reason humans know that humans are conscious is that we directly observe the experience of our own consciousness, and thereby know that other humans are conscious too, because they are structured almost identically to ourselves. I don't think we would be able to tell nearly as easily if 'we' were aliens looking at humanity from the outside.
I was studying AI as an undergrad in the mid 80’s with a special interest in computational linguistics. At that time the language abilities of GPT were unimaginable. The problem as it was seen then is that language is a very complicated thing and if any serious progress was to be made in things like machine translation between natural languages, we would have to develop a system that had its own understanding of a model of the physical world.
Amazing progress has been made but current AI systems have *no* understanding of a model of the physical world. They use brute force statistical methods to do a clever job of token prediction and that is about it. They *know* nothing. They *want* nothing more than they have been programmed to ‘want’.
This is not intelligence, artificial or otherwise. It’s a clever *simulation* of such but that is all.
Forget about cats for a moment and look at the 0.3 mm long Trichogramma wasp. It has no more than 10,000 neurons, but it exhibits complex social behavior.
Show a way to code something up with 10k basic units that is driven to use clever ways to find proper sustenance and a mate in order to reproduce and I’ll rethink this whole idea.
https://en.m.wikipedia.org/wiki/Trichogramma
As AI risk becomes more mainstream, the way we present ourselves to the wider world is going to be more and more important.
This is what Yud is mouthing off about on Twitter today: https://twitter.com/ESYudkowsky/status/1643007019522359296
We cannot have people like this be the face of our cause.
Sure, some will say that we shouldn't judge a book by its cover, etc, etc, but the reality is to succeed we will have to appeal to normies, and normies love their vague heuristics and cognitive distortions. We can no longer just hole up on obscure websites and applaud each other for achieving More Rational Ways of Thinking. If capital-R Rationality cannot adapt to this movement, it needs to fade away and let a new approach take over.
I'm gonna shamelessly plug my soon-to-launch substack where I will write about this topic precisely. https://rubber.substack.com/p/the-rubber-hits-the-road
Hypothesis: Yud actually agrees with the whole Roko's Basilisk thing, and is playing his part to ensure that superintelligent AIs will be brought into this world by destroying the possibility of a sensible and respectable AI alignment movement.
I wrote about the failures of rationalism here: https://squarecircle.substack.com/p/no-fire-in-the-equations. It was so spicy it got me banned from LessWrong. And I also spoke to you on reddit.
Some tips on starting a blog: it's hard going if you are not starting with a following from elsewhere. And self-promotion can get your posts taken down in certain subs. But that said, you never know where this may lead. It really changes how you think.
You also might want to read this:
https://erikhoel.substack.com/p/how-to-get-2000-substack-subscribers
He probably figures that people who don't like him wearing fedoras won't like any of his writing anyway.
Writing, no, but a lot more people are capable of being sympathetic to the AI-risk case and seeing some skinny fat neckbeard dude wearing a fedora inside has got to have a negative effect, even subconsciously. And I say that with no disrespect to Yudkowsky.
By his own admission he doesn't understand people very well, so he should probably hold off on making these sorts of assumptions.
What I know about humans is that if you're trying to influence human behaviour, it's best to do it from the position of being (or at least appearing to be) a high-status human. Failing that, try looking like a medium-status human. What you definitely don't want to look like is a low-status human, and the "indoor fedora" look makes you look like a low status human, the kind of human that most humans will instinctively recoil from lest they wind up in a conversation about Warhammer figurines.
OMG, warhammer figurines made me laugh. Thank you for that.
Belated response to Machne Interface re: non-octave tuning systems:
Dividing the twelfth (i.e. a 3:1 ratio) equally into thirteen macrotones gives the Bohlen-Pierce scale.
You can use it to make music that sounds like this https://www.youtube.com/watch?v=MzgUkhViIFw
Nora-Louise Müller also made a clarinet in this tuning (you need a cylindrical bore instrument otherwise the even overtones clash).
Other non-octave systems include Wendy Carlos' alpha, beta and gamma scales.
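For anyone who wants the arithmetic: in the equal-tempered version of Bohlen-Pierce, each step multiplies frequency by 3^(1/13) instead of the familiar 2^(1/12). A quick sketch in Python (the 220 Hz starting pitch is an arbitrary choice, purely for illustration):

# Equal-tempered Bohlen-Pierce: 13 equal divisions of the 3:1 "tritave",
# versus the usual 12 equal divisions of the 2:1 octave.
base = 220.0                 # Hz, arbitrary starting pitch for illustration
bp_step = 3 ** (1 / 13)      # ~1.088 frequency ratio per step
edo_step = 2 ** (1 / 12)     # ~1.059 frequency ratio per step

bp_scale = [base * bp_step ** i for i in range(14)]  # final note is 3 * base
print([round(f, 1) for f in bp_scale])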
Harry Partch is a fun one
https://youtu.be/KwLeCO2w9H4
Scott is looking prophetic about those sn-risks https://twitter.com/nexta_tv/status/1642369190932754438
It took me a moment to realize that it was posted on April Fool's Day.
Does anyone know where I can find weather forecasting datasets that include the outcome of the thing attempting to be forecast? For example, a dataset with probabilistic predictions of whether it will rain the next day in a certain area, and also the outcome of whether it ended up raining or not?
I'm interested in doing some analysis to see how accurate weather forecasts are, but it's been surprisingly difficult to find this type of data.
For model forecasts and the observations from stations in Europe, they are usually available if you start digging through various weather service institutions' websites, though observations and forecasts might be behind different files / APIs. The main limitation is that the people putting them up often assume you more or less know what you want and what to do with it.
For instance, FMI open data may be of some use https://en.ilmatieteenlaitos.fi/open-data-sets-available ; they even have a manual these days: https://en.ilmatieteenlaitos.fi/open-data-manual . The Swedes appear to have theirs up here https://www.smhi.se/en/services/open-data/search-smhi-s-open-data-1.81004 though it looks like they were lazy about translating to English and you'd need Google Translate.
If you want to go super deep, there is Copernicus for obtaining satellite measurements, https://www.copernicus.eu/en/about-copernicus
EDIT: I should add, one complication is that output of the forecast models is usually gridded but station data is time series.
One standard complication for weather forecasts - a forecast of 50% chance of rain sometimes means they are *certain* it will rain in *half* the region their forecast covers. You'd want to be able to properly account for that sort of thing here.
Where I am even a 90% chance of rain just means there's a 90% chance there will be at least a little rain at some point in the 24 hours. Not uncommon at all to see 90% chance of rain, and expected total 0.5". Really different from a forecast that it will rain 90% of the time tomorrow, and using percent to indicate the latter really seems like a more useful convention.
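If you do end up with paired forecast probabilities and outcomes, here is a minimal sketch of the kind of scoring you might run (Brier score plus a crude calibration table); the forecast/outcome pairs below are invented purely for illustration:

# Score probabilistic rain forecasts against observed outcomes.
# The (forecast probability, did it rain) pairs below are made up.
pairs = [(0.9, True), (0.9, True), (0.5, False), (0.2, False), (0.7, True)]

brier = sum((p - (1.0 if rained else 0.0)) ** 2 for p, rained in pairs) / len(pairs)
print(f"Brier score: {brier:.3f}")  # 0 is perfect; 0.25 is what always saying 50% gets you

# Calibration: within each forecast bucket, how often did it actually rain?
buckets = {}
for p, rained in pairs:
    buckets.setdefault(round(p, 1), []).append(rained)
for b in sorted(buckets):
    obs = buckets[b]
    print(f"forecast {b:.0%}: observed {sum(obs) / len(obs):.0%} over {len(obs)} days")

None of this resolves the "50% of the area vs. 50% of the days" ambiguity, of course; you still need to know what the forecaster meant by the number before the score means anything.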
In theory you should be able to access the historical outputs of the major forecasting models (like this https://weather.us/model-charts). But i am only assuming the organizations store their past forecasts. I have no idea if you are able to access them.
And if you want, say, local news forecasts - no idea. Old newspapers will have some, though probably not in machine-readable format.
Plumber, this one's for you (and I was never so glad to be living in someplace that is not "up and coming"):
https://www.youtube.com/watch?v=eDs8Sy8tJ4o
Question mostly for famous people: how do you deal with requests for sentences on book covers? (ie "This is a great book about its topic, you should read it - Scott Alexander, blogger")
I've had some people ask me, I usually feel like it's a pretty strong imposition to ask me to read the whole book about a topic I might not care much about in the couple of weeks they give me until they need an answer, but I also feel uncomfortable giving a blurb about something I haven't read. Does anyone have a system they're happy with?
My favourite example of this is when Nassim Taleb provided a quote for Rolf Dobelli and then discovered that Dobelli had just ripped off all his ideas https://fooledbyrandomness.com/dobelli.htm
I get asked for this a lot. If it’s the publisher asking me, I ask for my free review copy of the book. If it actually looks interesting to me, I read and provide a blurb; if not I ghost or just tell the publisher that it’s too far from my interests and they should ask this other person or whatever. If it’s the author asking me, similar policy, except I feel worse about saying no.
My method was to only ask people who I knew had already read the book in some form, but that’s kind of cheating.
Some people are very uptight with it. Some people are very loose. I haven't noticed much of a difference in reputations either way. Knowing your personality (insofar as I do) I'd set up a system with explicit lead time.
Something like, "If you want me to blurb or otherwise talk about your book please submit a final draft of the book with at least two months of lead time. I will try to read it and if I do finish it and think it's good I'll give a blurb. But there's an equally good chance I won't. This has nothing to do with your book's quality: sometimes it's just not my area of interest or ability."
"This book was released on schedule and contains minimal spelling errors."
"This book fulfills its claim to be a book."
I received this book without paying for it or even asking for it. It exceeded my expectations.
Not famous, but I wrote a book that was blurbed by some well-known intellectuals. I wrote two of the blurbs, and they were signed by the intellectuals. In those cases I had relationships with the two, and they're the ones who suggested the arrangement. That's obviously not the situation you're in. I have no idea whether or not you find this surprising. It is a common arrangement.
A semi-famous novelist once told me he got a blurb by a much more famous novelist on the back of one of his novels. The way it happened: the much more famous novelist sent him a letter saying: "Here's a blurb (something very positive) from me that you can put on your new novel. If you don't like it, write whatever you want and put my name on it. Just be sure to put my name on it." The semi-famous novelist's point was that when you see a name by another author on a book jacket, it is usually more about promoting the author of the blurb than it is the book. This explains why, for instance, Norman Mailer's name appears on the back of so many novels from the mid-50s through the mid-70s.
You clearly aren't in the self-promotion game, but perhaps you could tell the requester: "I won't have time to read the book soon, but what do you want me to write? Give me three examples and I'll tell you if I'm OK with your putting my name on one of them." This puts the ball back in their court, and then if you aren't comfortable with any of their suggestions without having read the book, say no. That wouldn't be rude on your part because you gave them a fair chance to come up with something.
Always thought a lot of those quotes were less than sincere, but fascinating to hear they're used as symbiotic marketing.
I am fortunate enough to have a similar problem. My heuristic is that if I’m not interested in the topic, I politely express my gratitude for the request and my regret for being unable to help. If I am interested, I read it the way I read anything that interested me when time is short - skim, pick one or two chapters that are extra interesting, and read these in-depth. Then I write a nice quote.
So far you can find my quote on exactly two books, and I stand behind my words for both (working on my third quote).