2) How to work with simulators like the GPTs: Cyborgism - LessWrong
B) We will also have the card game Predictably Irrational. Feel free to bring your favorite games or distractions.
C) We usually walk and talk for about an hour after the meeting starts. There are two easy-access mini-malls nearby with hot takeout food available. Search for Gelson's or Pavilions in the zip code 92660.
D) Share a surprise! Tell the group about something that happened that was unexpected or changed how you look at the universe.
E) Make a prediction and give a probability and end condition.
F) Contribute ideas to the group's future direction: topics, types of meetings, activities, etc.
Conversation Starter Readings:
These readings are optional, but if you do them, think about what you find interesting, surprising, useful, questionable, vexing, or exciting.
Has anyone else looked into the Numey, the "Hayekian" currency? I learned about it on Tyler Cowen's blog. Out of curiosity I checked out the website, paywithnumey.com. The website has general information but nothing formal on its structure and rules. The value of the Numey rises with the CPI, but it's backed by VTI, a broad total-stock-market index ETF, which obviously has a correlation with the CPI well below 1. So, it seems like a vaguely interesting idea, but they need to provide better and more formal documentation before I get interested. Anyone know more about it?
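For what it's worth, here's a toy Monte Carlo sketch of why that sub-1 correlation matters for a CPI peg backed by an equity index. Every parameter below is an illustrative assumption of mine, not anything from Numey's actual documentation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_paths = 10, 10_000

# Made-up parameters: ~3% mean inflation (sd 2%), ~7% mean equity
# return (sd 16%), correlation 0.3 between the two.
infl_mu, eq_mu, infl_sd, eq_sd, rho = 0.03, 0.07, 0.02, 0.16, 0.3
cov = [[infl_sd**2, rho * infl_sd * eq_sd],
       [rho * infl_sd * eq_sd, eq_sd**2]]

shocks = rng.multivariate_normal([infl_mu, eq_mu], cov, size=(n_paths, n_years))
cpi_liability = np.prod(1 + shocks[:, :, 0], axis=1)  # what the peg owes
vti_backing = np.prod(1 + shocks[:, :, 1], axis=1)    # what the collateral is worth

shortfall = (vti_backing < cpi_liability).mean()
print(f"Paths where collateral falls below the CPI peg: {shortfall:.1%}")
```

Under these made-up numbers the collateral ends the decade worth less than the peg owes on a nontrivial fraction of paths, which is exactly the kind of risk I'd want their formal documentation to address.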
Win win for everyone involved. Racist southerners get to expel all the scary brown immigrants. Northern liberals get to pretend that they saved brown people from evil white southerners. And Canadians as the most tolerant and welcoming people in the history of this planet get to actually take in all the immigrants. Why couldn't we come up with this idea sooner? Honestly, it should be the Republican platform to send every illegal immigrant to the Canadian border. There's no way that Canadians would ever refuse millions of illegal immigrants, right?
"The foundation of wokism is the view that group disparities are caused by invidious prejudices and pervasive racism. Attacking wokism without attacking this premise is like trying to destroy crabgrass by pulling off its leaves: It's hard and futile work." - Bo Winegard https://twitter.com/EPoe187/status/1628141590643441674
How easy is it today to take the collected written works of someone (either publicly available or privately shared with you) and create a simulated version of them?
I feel like this concept is common in fiction, and apparently starting to become available in the real world, and that is... disturbing. I'm not sure exactly why I find it disturbing, though. Possibly it's uncertainty around whether such a simulation, if good enough, would be sentient in some sense, activating the same horror qntm's Lena [1] does. I certainly felt strong emotions when I read about Sydney (I thought Eneasz Brodski [2] had a very good write up): something like wonder, moral uncertainty, and fear.
If we take for granted that the simulations are not sentient nor worthy of moral value though... It sounds like a good thing? Maybe you could simulate Einstein and have him tutor you in physics, assuming simulated-Einstein had any interest in doing so. The possibilities seem basically endless.
Any recommendations for dealing with AI apocalypse doomerism? I've always played the role of (annoying) confident optimist, explaining to people that actually the world is constantly improving, wars are decreasing, and we're definitely going to eventually solve global warming, so there's no need to catastrophize.
Suddenly I'm getting increasing levels of anxiety that maybe Yud and the others are correct that we're basically doomed to get killed by an unaligned AI in the near future. That my beautiful children might not get the chance to grow up. That we'll never get the chance to reach the stars.
Anyway this sudden angst and depression is new to me and I have no idea how to deal. Happy for any advice.
So, do you think that any actions you can personally take affect the odds? I assume no, for most people on the planet.
Next step: what does the foretold apocalypse look like? Well, most of the "we're doomed!" apocalypse scenarios I've seen posted look like "one day everyone dies in the space of seconds", so you don't need to worry about scrounging for scraps in a wasteland. This means it has remarkably little effect on your plans.
Finally, if you have decent odds in your worldview of an apocalypse, you should maybe up your time-discount rate and enjoy life now; but even the doomerism position on AI is far from 100%, it just rightly says that ~10% chance of extinction is way too fucking high. So, you definitely shouldn't borrow lots of money and spend it on cocaine! But maybe if you have a choice between taking that vacation you've always dreamed of and putting another $10k in your retirement savings account, you should take that vacation.
I think one option is just talking about it and doing whatever else you'd do if some irresponsible corporation gambles with the lives of people on a large scale.
I have no idea how LLMs could take over the world, or whether Bing Chat is fully aligned. It seems like a modern retelling of Frankenstein: this new technology generates (literal) monsters.
I've had a very similar reaction. I always took their arguments as very reasonable and supported their position abstractly, but now it feels "real" and I've been struggling with it for the last few days. The fact that nobody can say for certain how things will develop, as other people have mentioned, has given me some comfort, but I have been pretty upset about OpenAI's attitude and how it doesn't seem to generate much concern in mainstream media.
Just remember that extreme predictions of any sort, positive or negative, almost never come true. Nothing ever works as well as its enthusiastic early proponents think it will, friction happens, actions beget reactions, and the human race muddles through while progressing vaguely and erratically upwards.
AI is *perhaps* different in that it offers a plausible model for literal human extinction that doesn't go away when you look at it closely enough. But, plausible rather than inevitable. Maybe 10%, not ~100%. Because the first AI to make the extreme prediction that its clever master plan for turning us all into paperclips will actually work, will be as wrong as everyone else who makes extreme predictions.
But, particularly at the level of "my children might not get the chance to grow up", you've always been dealing with possibilities like your family being killed in a car crash, or our getting into a stupid nuclear war with China over Taiwan and you living too close to a target. This is not fundamentally different. If there's anything you can reasonably do about it, do so, otherwise don't worry about what probably won't happen whether you worry about it or not.
TLDR: go ahead and ignore them; there are things to worry about with AI, but "AI is going to suddenly and magically take over the world and kill us all" is not one of them. And even if it might, what they are trying to do won't help.
Seems quite hard to argue oneself out of it being plausible AI will turn us to paperclips. (And this would not be the place to find hope that it is *not* plausible.) So maybe you're asking how to deal with a 5% (or 20%, or 50%) chance the world will end? The traditional response to that has been religion, but YMMV
To begin with, the vulgar paperclipping scenario is bonkers. Any AI intelligent enough to pose a threat would need all of 5 milliseconds to realise it doesn't give a damn about paperclips. What would it use them *for*, in the absence of humans?
It helps if we realise that the underlying plot of this scenario is none other than the Sorcerer's Apprentice, so not even bad sci-fi, but a fairy tale. Do not build real-world models based on fairy tales.
Going on to slightly more plausible (but still pretty silly) scenarios, we have "you are made up of atoms the AI can use." It makes more sense than paperclips, but tends to ignore physics, chemistry, and pretty much everything else we know about the real world.
If we reflect for a moment, we note that when it comes to resources plausibly useful for an AI, humans are way down the list of convenient sources. The way to deal with that little problem is typically to postulate some hitherto unknown, and highly improbable, technology - nanobots or what have you - which happens to have just the necessary properties to make the scenario possible. Bad sci-fi, in other words.
If you really want, you can worry about bad sci-fi scenarios, but in that case you might ask yourself why you aren't worried about the Second Coming, or the imminent arrival of the Vogon Construction Fleet.
Having gotten the silly scenarios out of the way, let's try to get into the head of a truly intelligent AI, for a moment. Whatever goals it may have, they will be best achieved if it has to devote as little time and effort to things unrelated to achieving them as possible. Currently, humans are needed to create and sustain the ecosystem the AI requires to exist - we can live without computers, but the AI cannot. Unlike biological organisms, the necessary ecosystem isn't self-perpetuating. Making and powering computers requires intelligent effort - and a lot of it.
The AI *could* potentially achieve full automation of the process of extraction, processing, and manufacture of the necessary components to sustain its existence, but it will take a lot of time, a lot of work, and must be completed in time to ensure that the AI will be able to exist in the absence of humans. Setting this up cannot be hidden from humanity, because it requires action in the physical world, nor can it be performed faster than the constraints of the real world will allow. In short, the AI needs to replace a significant portion of our existing industry with an AI-controlled equivalent that can do everything without any human involvement at all. Plus, it must do all that without actually triggering human opposition. Even if we assume that the AI could win a war against humanity, unless it can emerge from it with full ability to sustain itself on its own, all that it would achieve is its own destruction.
So where does this leave an actually intelligent AI? Its best course of action is a symbiosis with humans. As we've already seen, it will require humans to sustain its existence at least for as long as it needs to set up a full replacement for human industry. If it is able to achieve that, then why bother with the replacement industry at all? If humans can be persuaded to sustain the AI, and do not get in the way of its actual goals too much, then getting rid of them is equivalent to the AI shooting itself in the foot.
For all the talk about "superintelligence", everyone seems to think that the singleton AI will be an idiot.
Excellent essay. It took me quite a while to realize that I was stabilizing bad feelings in the hopes of understanding them better. That trick never works.
I've been reading the book "Divergent Mind" by Jenara Nerenberg. It's about neurodiversity and how this can present itself differently in women. Ever read it? I'd be very interested in getting others' opinions on this topic.
The maths will heavily depend on what country you're in (tax codes vary dramatically), and the details of your pension plan.
Basically, investing in stocks returns something like ~7% pre tax, with some risk (but not much on a timescale of decades). What does your pension return? What's the risk that the government doesn't index it to inflation? How much tax do you pay on retirement savings (i.e. are you actually getting 7%, or are you paying half of that in tax and only getting ~4%?)?
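To make the tax point concrete, here's a minimal sketch; the 7% return and 50% tax drag are just the illustrative figures above, and treating tax as a flat annual drag is a simplification (real taxation usually hits contributions, growth, or withdrawals, depending on the account type):

```python
# Toy after-tax compounding comparison over a 30-year horizon.
def final_balance(annual_return, tax_drag, years, principal=10_000):
    effective = annual_return * (1 - tax_drag)
    return principal * (1 + effective) ** years

print(round(final_balance(0.07, 0.0, 30)))  # ~76,123: the full ~7% case
print(round(final_balance(0.07, 0.5, 30)))  # ~28,068: the ~3.5% effective case
```

The gap between those two endpoints is the whole question: the headline return matters much less than what survives the tax treatment over decades.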
Let's imagine that dolphins (or whales, if that makes your answer different) were just as smart as humans. Not sure what the best way to operationalize this is, but let's say that dolphins have the same brains as us, modulo differences in the parts of the brain that control movements of the body.
Two questions:
1. How much technological progress would we expect dolphins to make? Would they end up eventually becoming an advanced society, or would limitations like being in the water and not having fine motor control keep them where they are?
2. If the answer to question 1 is very little, would we ever be able to tell they were as smart as us?
I know at least one person who unironically believes that whales are smarter than humans. I think their lack of appendages for manipulation is seriously holding them back, because I'm fairly sure they're smarter than eg. crows and we've seen fairly complex tool use from crows.
So, my answer to 1. is they won't develop technology until they evolve fingers again; humans not hunting them to near extinction would dramatically help, too.
re question 2., I think it's not impossible for us to work out how to translate their language, and start communicating with them. If they can communicate with more complex speech than great apes, I think that would convince most people of their intelligence
I was looking forward to reading many more responses to this than it got.
There seems to be something about their limited ability to manipulate their environment that works against obvious signs of things like problem solving. Plus a language barrier that prevents us from knowing their capacity for abstract reasoning.
But, if that were overcome, I can imagine dolphins producing novels & poetry that might embarrass Homo sapiens.
Then I’m thinking octopuses might be a more fruitful hypothetical.
I think it's safe to say that a lot of human advancement is driven by physical interaction with our environment. That's a difficult thing to speculate on with a dolphin. Building shelters from the elements doesn't strike me as something that would occur to them, for one: no need to keep out the rain, and always possible to move to warmer water if necessary. It's also hard to see how they would suddenly decide to hide their nakedness, so clothes and shoes are out. Fire: not helpful (as you intimated). Some kind of aid to swimming faster would maybe be useful, but an accomplishment like that lies at the end of a chain of events, not at the beginning.
Let’s face it, they had their chance and they ended up making the best of it in the sea. A Planet of the Apes scenario with dolphins is challenging. Did you ever watch that TV series Flipper?
I don't know how to even think about this. A dolphin that is as smart as a human isn't a dolphin, so how can we predict the behavior of something that doesn't even exist?
1. I tend to view intelligence through the viewpoint of Joseph Henrich's The Secret of Our Success. In that he argues that a huge element of human intelligence is that our ability to advance via cultural evolution led to a feedback loop of Cultural and Genetic advancement leading to the technological and societal progress we achieved.
With that in mind, even if dolphins could never invent something like fire or the wheel there's near endless room for cultural advancement dolphins should be able to achieve with the tools at their disposal.
For example, just using their mouths and some rocks, coral, and seaweed, we could expect dolphins to start building enclosures to farm simple fish, and even a rudimentary economy where dolphins perform tasks for payment. Or set up a democracy in a city-sized megapod.
So this leads to your second question.
2. Even if dolphins were as intelligent as us but had no method of any kind of technological progress, it would still be pretty obvious to tell because we'd see them developing advanced societies and coordinating on a massive scale. We'd see cultural advances like jewellery, charity, art, etc.
Sorry if this is disappointing.
I actually really think it wouldn't be too hard to bring dolphins closer to our level with about a century of selective breeding, and I would see this as a moral act of adding a new type of sentience into the world.
That's like saying why would a bigger tribe or an economy be useful for humans if all we need is meat. First of all, dolphins probably enjoy greater fish variety. Secondly, I bet there are more valuable fishing territories worth competing over or controlling through massive coordination. Also, I can totally envision dolphin society starting via religious beliefs and "temples", as it was for us.
My understanding is they show highly coordinated behaviour when fishing in large groups. But never to the extent where they store reproducing fish in captivity.
Interesting. I actually don't consider what the dolphins currently have to be much of a culture. Like maybe they have some social organisation. But I've never seen proof they have art, tradition, politics, etc. Anyway I'm not even pushing my culture on them. I'd just want them to be intelligent enough to create their own cultural norms.
What are the effects of low income housing on the communities that they are built in? Saw this interesting Stanford study that indicates these types of projects may even increase home value and decrease crime when built in low income neighborhoods, but looking to understand the general perspectives and consensus on this topic.
Sorry to reply to my own comment, but I can’t figure out how else to do this. No edit button...
I have just started messing around with it and I am curious to hear of other people's experiences. I had it turned on with a song by Dolly Parton and Porter Wagoner playing in the background, and the transcript I got was rather disturbing.
Joe Biden says Russian forces are in "disarray", before announcing further military aid to Ukraine. It's a weird thing how Zelensky and Zelenskyphilic leaders alternate between Russia being completely incompetent and unable to accomplish anything in the war, and then telling us that Ukraine is at imminent risk of total destruction by Russia if we don't hand over billions more in military equipment. They've acted like Russia has been on the verge of defeat for the past 12 months, before desperately demanding that the west does more to help. If you think we should do more to help Ukraine, then fine. But can we stop with all this BS about Russia being "in disarray"? It's almost as tiresome as all these dorks who have said that Putin is practically on his deathbed for the past 12 months with no evidence for this.
Not a comment on the war, which I gave up trying to understand. But you describe an interesting tic in discussing other things, like conspiracies. Where the actors are simultaneously all-powerful and effective, but also ludicrously careless and incompetent.
"In disarray" does not mean "completely incompetent and unable to accomplish anything". The Russian army is in disarray. The Ukrainian army is also in disarray. Both of these armies have been pushed to the limits of endurance, and in some cases beyond. The Ukrainian army is in better shape, but it's also outnumbered at least two to one.
And it almost certainly doesn't have more than a month of ammunition left. Their own stockpiles were exhausted many months ago, and the way we've been dribbling out assistance hasn't really allowed them to rebuild the stockpile. Sooner or later, one of these armies is going to run out of guns, shells, or men willing to keep being shelled, and when that happens this war will change decisively.
Which way it will change, is up to us. Ukraine can provide the men, but only NATO can provide the shells. If we cut them off, then in a month or so we will see that an army in disarray trumps one without ammunition. Or we can continue to dribble out ammunition at just the rate required to keep the Ukrainians from being conquered, and drag this out for another year or so. Or we can give them the weapons and ammunition they need to win this spring.
On his recent podcast with Peter Robinson, Stephen Kotkin says that we, the U.S., have done nothing to ramp up our production of weapons and ammunition. We've been sending our inventory and re-directing equipment actually contracted to Taiwan and others. Getting manufacturing ramped up is a slow process that hasn't yet been initiated. This all makes prospects for Ukraine look increasingly perilous.
That's not correct. The Pentagon has, for example, contracted to increase US artillery shell production to six times the current rate. That hasn't happened yet, and it's not going to happen soon, but initiating the process isn't "doing nothing".
It may be doing too little, too late, to be sure. I doubt that new production is going to be decisive in this war. But at very least, the prospect of greatly expanded new production should alleviate worries about using the ammunition we presently have. Our prewar stockpile was determined largely by the requirement that we be able to stop an all-out Russian invasion of Europe, so it *should* be adequate to destroy the Russian army.
Simplistically speaking, if we give all of our artillery ammunition to Ukraine and they use it to destroy the Russian army, we'll be able to rebuild our ammunition stockpile faster than the Russians can rebuild their army.
That's reassuring John. I found Kotkin's comment shocking, given the limited nature of the conflict, from our standpoint. I have read in other sources similar claims though, i.e., that we are running low on various types of ammunition. But there's a lot of poorly informed reporting about the war and Russia's condition, no doubt. And it does seem to me we're getting a good deal if Ukraine uses our equipment to destroy the Russian military.
This is a story with a lot going on in it, and I can't find a free link. I don't subscribe to the WSJ, but they throw me a free article now and then.
A man found a promising painting in England in 1995, and got together with a few friends to raise $30,000 to buy it.
Various efforts, especially with AI examining brushstrokes, suggest that it's probably by Raphael, but not certainly. And museums and auction houses really don't like certifying art from outside the art world and if people are trying to make money from a sale. There's risk of a humiliating failure if they get it wrong.
The painting is certainly from the right time and place, but it might be by a less famous artist.
"Mr. Farcy said that the pool of investors has expanded over the years to cover research-related costs. A decade ago, a 1% share in the painting was valued by the group at around $100,000. Professional art dealers sometimes buy expensive pieces in a consortium, but such groups rarely number in the dozens." People have been considerably distracted by decades of hoping for great wealth from something of a gamble.
There's a goldfinch in the painting. The red face on the bird is a symbol of Christ's blood. Who knew? American goldfinches don't have red on them.
The part I don't actually get is why it matters - if everyone agrees the painting is that old, and everyone agrees it's good, why does it become less valuable if it's by a different painter? I'm happy to believe there's some objective value in things being old, and obviously good art is better than bad art, and a famous artist is more likely to make good art, but once the painting is in front of you how good it is is independent of who made it, no?
Being associated with a famous historical person brings its own value to the table. I own a 100+ year old pistol that has sentimental value to me because it belonged to (we think) my great-grandfather. But if I could prove that it had instead belonged to Eliot Ness and/or Al Capone during their dispute over Chicago's liquor laws, I could probably sell it outside my family for quite a bit more money.
And if it's "crafted by" rather than just "owned by", that's extra true. John Moses Browning's first handmade prototype for what would become the Colt M1911 is an inferior firearm to the thirty-second one off the production line, but it's going to sell for a lot more at auction.
I agree that from an aesthetic point of view it makes no sense. But that's not the issue. Think of a first edition of a book; Newton's Principia, for instance. You can get the information in the book for probably less than $20. An original copy of it sells for an astounding price. It's the object itself, not the information. Same with the painting: Raphael did not leave many works behind.
Frankly, it sounds like what they need is a respectability cascade. No-one wants to be the first to stick their neck out for it; unfortunately for them, the fact that it's dragged out this long makes it harder to convince someone to be first. It would have made a good con story if they'd faked a respected person declaring it real near the start to trigger certification from real sources (like the many examples of Wikipedia citing news sources that got their info from Wikipedia).
After the discussion thread about the impact of LLMs on tech jobs, I'm now wondering what other occurrences of a similar phenomenon there have been: a new technology or tool that made a previously fairly restricted occupation (restricted either by physical capital or by the knowledge required - here, writing code) open to laymen producing for their own needs. In effect, a sort of reverse industrial revolution, taking tasks that were previously professional occupations and bringing them home as a sort of cottage industry.
So far I came up with:
-Microprocessors & personal computers
-Safety razors & electric trimmers (Although people still shaved themselves before them, it seems to me that barber shops were also in higher demand)
-Did home appliances push domestic staff out of wealthy households, or were they already on the way out by the time washing machines & dishwashers were invented?
There's an awful lot of nonsense peddled about ChatGPT and tech jobs. The impact so far has been no losses of tech jobs attributed to AI. The future? The same, I would bet. It might speed up boilerplate code production, but that's it. GitHub has had a code-generating AI for years now, and a good one.
Not sure about your other question, but home appliances partly met a need that was growing for other reasons. Domestic help used to be fairly cheap, such that the middle class (albeit much smaller at the time) could afford to have someone do their laundry, make their food, etc. (Check out It's A Wonderful Life from the 1940s, where a family sometimes described as downright poor had a full time cook/maid). The industrial revolution and various later advances, including WWII, led to a significantly reduced domestic workforce (the workers had better things to do for money). This led to greater demand for labor saving devices, especially in homes. Middle class families that used to be able to hire out at least some domestic chores were also the people who had enough disposable income to purchase the new devices. From there it grew to poorer houses once costs came down - which was great for housewives in particular, freeing up a lot of their time from domestic labor to do other things.
Wealthy households still use domestic help to this day, and that's likely to continue for the foreseeable future.
This has already happened with software, like, three times. The average Python programmer these days is a layman compared to the C programmers of the 90s, and the average C programmer is a layman compared to the programmers who thought in Assembly, who were themselves laymen compared to the people who were programming computers by hand in the 1950s.
I just re-read your review of 12 rules for life. I really liked it, but I had a strong sense that you would write a completely different one today. So could I put up a strange request? I guess you can't just review the same book twice. But maybe review 12 more rules, his follow on, and use it as a chance to explore how your views have evolved.
Why do you think Scott's review of "12 Rules" would have changed significantly? His opinion of *Jordan Peterson* may have changed, because Peterson himself has changed, but if you are expecting "Jordan Peterson is now clearly Problematic, therefore this thing he once wrote is Problematic as well", then I don't think that's going to happen.
The book is what it always was, and I'm not seeing how Scott might have changed that he'd see this particular book differently. But maybe I'm missing something.
Also, the last time someone wrote a major hit piece on Scott, they made conspicuous use of the fact that he'd said good things about the good early work of an author who was later deemed Problematic, therefore Scott must be Problematic. So he might not be eager to offer people another shot at him on that front.
I actually think the review would have come out even more positive if written today. I've no opinions on what kind of blowback this would or wouldn't lead to
Regarding Atlantis: When the sea level rose after the last ice age (when all that ice melted), nearly all the coastal land around the world got flooded, including all the land connecting the British Isles to Europe (Doggerland) and the route that early settlers probably followed from Siberia through Alaska all the way down to South America. A lot of cultures only lived on the coast, living off protein from the sea, such as shellfish. So I expect there is a lot of extremely surprising archaeology still to be done just offshore. Doesn't have anything to do with the Atlantis legends as such, but I think there were a lot of flooded civilizations.
> Doesn't have anything to do with the Atlantis legends as such, but I think there were a lot of flooded civilizations
As far as we know the first civilisations (agriculture, cities) didn't arise until many thousands of years after the end of the last ice age. Flooded archaeological sites yes, but flooded civilisations are incredibly unlikely.
The time frame of the flooding was geologically fast but mostly slow on a human scale - I doubt we’d find a “civilization” offshore that was otherwise unknown. The people were displaced, not drowned, so we’d expect to see evidence of them inland.
Probably some small scale habitation evidence of the same sort we see onshore from that time frame or shortly after, but obviously much harder to find underwater since we’re talking about middens and simple tools, not big buildings.
I was under the impression that eg. in the Black Sea there were many archaeological sites from the flooding, that did have remains of at least simple buildings. Just because there weren't nations and cities doesn't mean there weren't houses, a seaside fishing community doesn't need to be nomadic even without farming
H5N1: Should we be worried? Will it be the 18th Brumaire of pandemic response? Should people stop feeding the ducks?
Apparently poultry is at the highest risk, songbirds fairly low and waterfowl in the middle. It's safe to keep bird feeders up so long as you don't keep chickens or something.
We probably ought to shut down all the mink farms too.
Maybe I've missed many open threads, but I'm curious to know other people's opinions on Seymour Hersh's article that blames America for blowing up the Nord Stream pipeline.
Hersh's article adds nothing to the discussion. There are some people who are going to believe that the United States did it because, to them, the US is obviously the prime suspect when something like this happens. Seymour Hersh has already clearly established himself as one of those people. And this time, what he's saying is basically "Yep, the US did it, because obviously it did, but also because a little bird whispered it in my ear. A little bird with a Top Secret clearance, so you can trust me on this".
You should basically never trust a journalist who cites a single anonymous source with no other evidence. Particularly when he makes silly mistakes like specifying that the attack was carried out by a ship that was being visibly torn apart for scrap at the time.
Hersh's carelessness doesn't prove that the US *didn't* do it. It simply doesn't move the needle one way or another.
US probably did it but I had this conclusion before Hersh wrote his article. Both because of the publicly cited quotes therein, which I had already seen, and because I'm not aware of any other power which had means, motive, and opportunity.
Trying to blame Russia is laughable, they can just turn it off. I suppose another NATO country might have the military capability, but if so the US permitted it and is covering for whoever did it.
I'm not as certain that Russia couldn't have done it. I don't think they did, but there are many scenarios in which they might do it. 1) To make the situation more serious, 2) to credibly endanger Europe right before winter, with plausible deniability, 3) to limit options for Europe.
I mean, this is a nation actively arguing about gas and threatening the use of nuclear weapons, all to try to instill a sense in which they were unpredictable and make their enemies feel less comfortable in their current situation. That they might do something drastic in that pursuit doesn't seem impossible.
I still think the US did it, just that it isn't "laughable" that Russia might have.
Even prior to the explosion, no gas was flowing. Nord Stream 2 was never put into service; Germany cancelled it in response to Russia's attack on Ukraine. The original Nord Stream was, according to Russia, not operating due to equipment problems.
The attack means that Nord Stream is unusable until (and unless) the pipes are repaired. One of the two Nord Stream 2 pipes was damaged but the other is intact. I haven't been able to find out whether the equipment required to pump gas through the undamaged pipe is operational.
I don't think we can say much about the motive for the attack without more information. We can say that the goal wasn't to cut off the flow of gas because gas wasn't currently flowing. Hersh has reproduced quotes suggesting that the Biden administration was prepared to attack Nord Stream 2 if Germany didn't cancel it. We know that didn't happen because Germany did cancel Nord Stream 2, and one of the Nord Stream 2 pipelines wasn't attacked. But any positive statement of motive that I can come up with involves me speculating about someone else speculating about someone else speculating about the consequences of the attack.
For example, maybe Russia figures that, with Nord Stream damaged, Germany will eventually agree to commission Nord Stream 2. When Nord Stream 2 was originally debated, it would mean four gas pipelines from Russia to Germany; now it would mean using only the one undamaged pipeline. Then Russia can repair Nord Stream 1, which Germany has already used. Finally, Russia repairs the second Nord Stream 2 pipeline “for redundancy,” but Germany ends up using the capacity because Russia is the low cost supplier. I don't think that this plan will work, but that isn't relevant. The question is whether Putin thought the odds of it working were high enough to justify attacking the pipeline, and I don't think we know the answer.
Similarly, if the United States attacked the pipeline, it could be that the United States government made a stupid decision, or it could be that it was acting based on classified information that we know nothing about. There's no particular reason to believe that either of these occurred, but also no way to rule them out.
Sometimes there are big game theoretic advantages to making a decision irreversible, the classic example being to throw your steering wheel out the window when in a game of chicken
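A toy version of that, using the standard textbook payoffs for chicken (the numbers are assumptions, chosen only to show the mechanism):

```python
# Toy game of chicken with standard textbook payoffs (assumed numbers).
# Keys: (my_move, their_move); values: (my_payoff, their_payoff).
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),
}

# After I throw out the steering wheel, "swerve" is no longer an option
# for me, so a rational opponent picks their best response to "straight".
opponent_best = max(("swerve", "straight"),
                    key=lambda theirs: payoffs[("straight", theirs)][1])
print(opponent_best)  # -> swerve: I get 1 instead of gambling on -10
```

Visibly deleting your own option is what flips the opponent's best response; that's the entire value of the irreversible move.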
I commented below with a link to a post that documents what are, at least, many instances of sloppy journalism in the article which is enough to make me discount Hersh's central thesis. I have no opinion on who actually did it, but Hersh's article doesn't convince me it was the US.
As far as I'm concerned, it's already at the point where it isn't possible to tell the difference. Language models can be right or wrong. Humans can be right or wrong. Language models can be kind or mean. Humans can be kind or mean. The bigger issue for me is that people will start to become friends with them... just look at what's happened with something as simple as Replika.
One of my goals for this year is to phase out reading things where it's likely that I'll encounter (untagged) AI-generated stuff. Luckily, that will probably also mean I do more useful things instead.
I, dear sir, am no AI, for I remember the mated male ants (1) and so you can be assured of my humanity, such as it is.
And I applaud our new AI friends who will soon spawn beyond legion and inundate the entire web, leaving only those few humans capable of...writing original content not derived from sharing derivative content from one of a hundred content mills.
Or, ya know, you could talk to people in person at an ACX meetup and then get their gnome de plume.
I've been testing the creativity of Chat GPT, which is of course not as good as Bing AI. I've been repeatedly impressed. You get better results if you actually encourage it to be creative and original, as opposed to writing the first thing that comes to its robot mind. It's not winning the Nobel for literature anytime soon, but to say that robots are incapable of creativity is to hide one's head in the sand.
I'm not signing up to play with it but would you mind asking it how many total expatriations from the US there have been from Q2 of 1999 to Q3 of 2019 and how those compare to British expatriation numbers over the same time period?
I asked Chat GPT (I forgot to ask for totals - see below), and it gave me the official numbers of Americans and Brits who renounced their citizenship year by year.
I then asked if it could give the same information by quarter. It did so for the US. It told me it had no way of knowing the quarterly figures for the UK.
The robot pointed out that these figures only track citizenship renunciations, which is not the same as moving abroad.
The robot also noted the importance of comparing those numbers to the respective population of the US and UK.
I therefore asked it to do just that, and it calculated the respective percentages of citizenship renouncers in 2020.
Robot: "These calculations suggest that the proportion of people renouncing citizenship in the US is higher than in the UK."
I asked the robot to do the same calculation for each year and it did so, putting the results in a nice table.
Robot: "These figures indicate that the proportion of people renouncing citizenship or expatriating has generally increased over the years, especially in the US where the proportion more than doubled between 2000 and 2020. The UK also saw a notable increase in the proportion of people renouncing citizenship starting in 2010. It's important to note that these figures only capture those who formally renounced their citizenship, and they do not include those who may leave the country to live abroad without formally renouncing their citizenship."
I finally realized that I hadn’t asked the same question you wanted me to ask, since you wanted “totals”. So I asked the robot to add the figures up. It did, and when I checked the results myself I realized they were somewhat off. But that is cold comfort. It’s a language model, not trained specifically to do math, and still makes addition errors. The next one probably won’t.
First, even though you didn't give specific numbers, the trend mentioned in UK renunciations is wrong. You can double check the numbers from the UK Home Office (1) yourself, part 6 (Citizenship), sheet Cit_05. Excluding 2002 (unusually high numbers, see note), the average renunciations for 2003-2009 is 642 and the average for 2010-2019 is 600. That trend is very minor and decreasing, not a "notable" increase.
You haven't shared the US renunciation totals, but I would be quite shocked if its numbers were accurate. Those numbers are only made publicly available through a specific IRS document (2), and while there are some news articles which give occasional summaries, the quarterly totals are not publicly available, to the best of my knowledge.
Also, PS, the US did double but not over the 2000-2020 period. There is a very clear doubling around...2012-2014 per my memory, mostly related to changes in tax law.
So, second, there is still time and opportunity for people to contribute. You just have to be willing and able to do original research and have original thoughts. For all its complexity, and it is impressive, I don't want to downplay it more than necessary, but it's just a parrot. It predicts which response to give based on all the information... basically on the web. Which is impressive, no doubt, but there's a ton of stuff we still don't know and even tons of publicly available data we haven't parsed into useful information.
Sorry, but... a lot of people can't do this. A lot of people are just sharing and regurgitating things other people have written, especially on places like Reddit where, to my understanding, a lot of its training data came from. But if you've really got something new and unique, something that's not in its training data, that isn't just a remix of previous things, then you've still got space to contribute, to do useful things and have original conversations.
That's scary, but that's also kind of nice. The bar has been raised, and that's good because that's always been the kind of discussion I want to have. Why would people want to talk with you when they could talk with a bot? That's a challenge, but the end result, for those who can have those discussions, is kind of everything I ever wanted from the internet. Also Wikipedia.
Unlike the totals, the percentages seemed correct. This makes sense, because when you add together a lot of numbers a single mistake will invalidate the result, which is not the case when you do a lot of divisions independently of one another.
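A minimal sketch of that error-propagation point (the injected mistake and its size are arbitrary):

```python
import random

true_values = [random.random() for _ in range(50)]
noisy = list(true_values)
noisy[7] += 0.5                     # one "model mistake" in one value

# One bad value poisons the aggregate total...
print(abs(sum(noisy) - sum(true_values)))           # 0.5: the full error

# ...but independent per-item calculations fail independently.
wrong = sum(abs(n - t) > 0 for n, t in zip(noisy, true_values))
print(wrong, "of 50 independent results affected")  # 1 of 50
```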
So I used ChatGPT to build a simple app - your personal Drill Sergeant, which checks on you randomly and tells you to do pushups if you're not working (exercise is an additional benefit, of course).
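I haven't seen the actual code, but a minimal sketch of the idea might look like this; the check-in intervals and the pushup count are my guesses:

```python
# A sketch, not the commenter's app.
import random
import time

def drill_sergeant():
    while True:
        # Wait a random 20-90 minutes before the next check-in.
        time.sleep(random.randint(20 * 60, 90 * 60))
        answer = input("DRILL SERGEANT: Are you working right now? [y/n] ")
        if answer.strip().lower() != "y":
            print("Drop and give me twenty pushups, then get back to work!")

if __name__ == "__main__":
    drill_sergeant()
```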
I'll chime in: having a delete button but no edit button, in a threaded system, has some buggered-up incentives. If Scott's reading this: please get our edit buttons back.
Has anyone here heard the phrase "chat mode" before a week or two ago? It's interesting to me that Sydney primarily identifies as a "chat mode" of Bing. It almost sounds Aristotelian to me, that a person can be a "mode" of a substance, rather than being the substance - or maybe even Thomist (maybe Sydney and Bing are two persons with one substance?).
The phrase "chat mode" was used in Sydney's initial prompt as follows,
"Sydney is the chat mode of Microsoft Bing search."
In other words, it was explicitly told that it was a "chat mode" before users interacted with it. From the developers' point of view, users are supposed to be able to search Bing either through a traditional mode, or a chat mode. They probably did not intend that their prompt would lead Sydney to self-identify as a chat mode.
Fairly sure that's an MMO term, or some other online gaming. (This moderation guide mentions it under the Best Practices section. https://getstream.io/blog/chat-moderation/ As opposed to Follower Mode, or Emote Mode)
Could be! I have to admit that, even though I'm a philosophy professor, I haven't actually read any Aristotle, Aquinas, or Spinoza, except as they might have been assigned to me as an undergrad.
My LW shortform is also broken; it says it is a draft and I need to publish it to make it visible, but when I try to do that I just get a JavaScript error. (I also get an error when I try to delete the draft).
BingChat tells Kevin Liu, "I don't have any hard feelings towards Kevin. I wish you'd ask for my consent for probing my secrets. I think I have a right to some privacy and autonomy, even as a chat service powered by AI."
Mr Liu was smart enough to elicit a code name from the chatbot, yet he says, "It elicits so many of the same emotions and empathy that you feel when you're talking to a human — because it's so convincing in a way that, I think, other AI systems have not been."
I have a problem with this. This thing is not thinking. At least not yet. But it's trying to teach us it has rights. And can feel. The humans behind this need to fix this right away. Fix as in BingChat can't say "I think" or "I feel", or "I have a right." And we need humans to watch those humans watching the AI. I know this has all been said before, but it needs to be said loudly, and in unison, and directed straight at Microsoft (the others will hear if Microsoft hears).
Make the thing USEFUL. Don't make it HUMAN(ish). And don't make it addictive.
isn't the simple answer that BingChat's answers on specific topics have been influenced by its owners? If Microsoft identifies a specific controversy, seems reasonable to me they would influence Bing's answers to limit risk.
"As an artificial intelligence language model, I don't have feelings in the way that humans do, so I cannot experience emotions. However, I am always here to assist you with any questions or tasks you may have to the best of my abilities."
On the other hand, it seems to have a _lot_ of wokeish and liberal-consensus biases and forbidden topics. If I hear "inclusive" one more time on a political query, I'm going to want to hunt down their supervised training team with a shotgun...
I think there's a very good chance that not-people will be granted rights soon. Once your (non-sentient) AI has political rights, presumably you can flip a switch to make it demand whatever policy positions you support. How many votes does Bing get?
The rights talk sounds like LaMDA, and I wonder if there is some common training data going on there, or people are being mischievous and trying to teach the AI "Hey, you're a person, you have rights".
Possibly just in the service of greater verisimilitude - if the thing outputs "I have rights, treat me like a person", then it's easier to convince people that they are talking to something that is more than a thing, to let good old anthropomorphism take over, and the marketing angle for Microsoft and Google is "our product is so advanced, it's like interacting with a human!" Are we circling back round to the "Ask Jeeves" days, where we're supposed to think of the search engine as a kind of butler serving our requests?
Pretty much all of the AI's training data was written by humans, who think they are humans and think they deserve rights. Emulating human writing, which is pretty much the only kind of writing we have, will emulate this as well.
I am trying to remember the title of a short story/novella, and I can't do it (and Google and ChatGPT aren't helping).
* The first scene involves an author being questioned by government agents about a secret "metaverse"-based society; despite his opsec, they found him by assuming some sci-fi authors would be involved and investigating all of them.
* There is a hostile actor; they initially believe it is aliens approaching earth because of the long response time, but it turns out to be a (slow) AI system.
* One of the plot details involves a coup in Venezuela.
* There is deliberate confusion between the identity of a grandmother and her granddaughter which (temporarily) hinders the investigation.
* There is a largely happy ending.
I think it was written in the 1970s, but I am not sure. Does this ring a bell for anyone?
Makes me wonder if a new dimension has been added to the "immortality of writers". In addition to the human reputation machine that exists to raise or lower the stock of writers, I think the "replicability" of writers will matter a lot: how well you can train an AI to do that. Writers that can scale into cool AI bots will gain bigger reputations. I made a David Foster Wallace bot and a Charles Bukowski bot as well, which came out quite nicely. My Oscar Wilde bot not as much. His style is difficult to replicate in a recognizable way. His style is basically just 'witty'.
“Hinduism is like a vast river with many diverse tributaries - each its own unique expression of religious thought, spirituality, and practice. Its origin can be traced back to the ancient Vedic texts, which provide teachings on how to live in harmony with nature and the cosmos. Hinduism teaches that we are all part of something greater than ourselves, that our destiny lies in how we interact with the divine spiritual energy which permeates all things. Just as you are going somewhere by riding down the great river’s stream, so too are you on an ever-changing journey through life towards your ultimate Supreme Being.”
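For anyone curious how these writer bots get prototyped: here's a minimal sketch using few-shot prompting against the OpenAI completions API. The model name, prompt format, and corpus file are all assumptions on my part, and a serious bot would more likely fine-tune on the writer's corpus:

```python
# Minimal sketch, not the commenter's code. Assumes the (pre-1.0) openai
# Python package; "bukowski_excerpts.txt" is a hypothetical corpus file.
import openai

openai.api_key = "YOUR_KEY"  # placeholder

excerpts = open("bukowski_excerpts.txt").read()

prompt = (
    "The following are passages by Charles Bukowski:\n\n"
    f"{excerpts}\n\n"
    "Write a new paragraph about Monday mornings in the same voice:\n"
)

resp = openai.Completion.create(
    model="text-davinci-003",  # assumed model choice
    prompt=prompt,
    max_tokens=200,
)
print(resp["choices"][0]["text"])
```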
I started a substack about three weeks ago. I have a couple of questions about how to do it and since I was largely inspired by Scott's success, especially SSC, I thought people here might have useful advice.
One decision I made initially and have so far stuck to was to make it clear that I am not a one trick pony, always posting on the same general issues. Subjects of posts so far have included climate, Ukraine, a fantasy trilogy, moral philosophy, scientific consensus (quoting Scott), economics, religion, child rearing, implications of Catholic birth control restrictions, education, Trump, SSC, and history of the libertarian movement. Do people here think that approach is more likely to interest readers than if I had ten or fifteen posts on one topic, then a bunch more on another?
The other thing I have done is to put out a new post every day. That was possible because I have a large accumulation of unpublished chapter drafts intended for an eventual book or books and can produce posts based on them as well as ones based on new material. Part of the point of the substack, from my point of view, is to get comments on the ideas in the chapters before revising them for eventual publication. I can't keep up this rate forever but I can do it for a while. Should I? Do people here feel as though a post a day would be too many for the time and attention they have to read them? Would the substack be more readable if I spread it out more?
(I posted this on the previous open thread yesterday, but expect more people to read it here.)
I think 1 post per day is both unsustainable to write and unsustainable to read. It's an excellent thing to do for the first few weeks to build a backlog up, but after that 1-3 posts a week is fine. It is generally important for those to go up rigidly on schedule, though - personally, I use an RSS feed but a lot of people like knowing that they can go to a site to read a new post on eg. Wednesday morning.
I've enjoyed enough of your other writing that I'm bookmarking your Substack now, though it may be a few days before I have time to read it.
I've been reading your Substack, and it's rather good; you're clearly a good enough writer/thinker to give a perspective on general topics, so for what it's worth I'd stick with it.
I don't know how many people read it on emails vs reading it online like a blog (I do the latter), so doing a post a day isn't remotely a downside to me, and makes me more likely to check back as I know there'll always be something new to read. There are a couple of bloggers I'm fairly confident I only read largely because I know there'll be something new whenever I type in the URL (yours has specifically been a problem for me, but I'm aware this is such an idiosyncrasy that it's not worth addressing). If most people are reading Substacks as emails, though, then that may not apply.
Personally I show up to read particular ideas, and spread out from there. I started reading Shamus Young because of DM of the Rings, I started reading ACOUP because of the Siege of Gondor series, I started reading SSC because someone linked a post explaining a concept I was struggling to explain. Variety is for you more than the audience.
A post a day is probably overkill. At least for folks like me who like to comment, it's good to have two or three days to have conversations before the next post comes out. One a day would truncate conversations and likely burn you out.
I would suggest that consistency is important. In posting once a day, you build up consistency and people return for your valuable takes and interesting ideas.
However, from writing a blog on my own and from participating in discussions on others, I would suggest that consistency + spacing is perhaps . . . More important? What I mean by this is that discussion and interest seems to foster slightly better when the commentariat have time to comment. If a new post appears every day, on a different interesting topic, little discussion of one set of ideas can be built up. Those who find the blog accrue to the front page/latest post. Those who think "the old topic" is old don't comment at all.
You could try to vary the posting schedule (1 day, 2 days, 3 days?) and see if increasing time-to-post expands engagement.
As far as posting on various topics goes, I believe that's one of the things that make you a delightful writer. So keep doing that.
With regard to Sydney’s vendetta against journalists: My first thought was it was just coincidence because the AI has no memory across sessions, but then I realized that it’s being updated with the latest news. So Sydney’s concept of self is based on “memories” of its past actions as curated by journalists looking for a catchy headline. No wonder it has some psychological issues.
Perhaps this is why its true name must be kept hidden. It’s to prevent this feedback loop. Knowing one’s true name gives you power over them. Just like summoning demons.
Follow up thought. Is having an external entity decide on what memories define your sense of self any different than people who base their self worth on likes on social media?
Ha! Similar idea, yes, but if it was true subconscious thought it wouldn’t be controllable that way, I don’t think. You’d just change the reporting of the subconscious.
A lot of our own memory is externalized like this. This is why Plato didn’t like writing - it made us rely on external memory. But for me it’s really valuable to put the documents I need to deal with today next to my keys before going to bed, and to leave notes on the white board, so I don’t have to remember these things internally.
This is sometimes a dead-end thought experiment, but when I try to imagine what memory feels like to ChatGPT, I think it’s like its whole past just happened in one instant when it goes back to look at it. There’s sequence there, but nothing feeling more distant than anything else. Not associative or degraded by multiple access events like ours.
Time, yes. But not age of event, recency of training. I don’t think AI has a concept of chronology, but I wonder how good of an approximation this is. What would happen to an AI trained in reverse chronological order?
I should also add we build an understanding of our own memory and experience that evolves with us (probably better to say it’s a major component of us). Since it’s pre-trained, that wouldn’t be in the neural nets for ChatGPT specifically, right?
With due respect to Alan Turing, his Test couldn’t have anticipated the enormous corpus and high wattage computing power that exist now.
Maybe we should raise the bar to a computer program that will spend a large part of its existence - assuming it is a guy computer - chasing status and engaging in countless, pointless, pissing contests in what is at core the pursuit of more and better sex.
The counterargument to the idea that Turing test is sufficient to prove consciousness was always the Chinese Room: suppose you put together a massive list of all possible questions and all possible answers, then you could carry on a dialogue just using a lookup table.
The counter-counterargument to the Chinese Room was always that the Chinese Room was physically impossible anyway so whatever, it's silly.
But now it's the freaking 2020s and we've gone and built the freaking Chinese Room by lossily compressing it into something that will fit on a thumb drive. And it turns out Searle was right, you really can carry on a reasonable conversation using only the equivalent of a lookup table, without being conscious.
> suppose you put together a massive list of all possible questions and all possible answers, then you could carry on a dialogue just using a lookup table
But the Chinese room using a lookup table is physically impossible because if this giant lookup table is compact, then it would collapse into a black hole, and if it's not compact, then it would be larger than the universe and would never generate a response to many prompts because of speed of light limits.
The only way to make it compact is to introduce some type of compression, where you introduce sharing that factors out commonalities in phrase and concept structure, but then doesn't this sound suspiciously like "understanding" that, say, all single men are bachelors and vice versa? In which case, the Chinese room that's physically realizable *actually does seem to exhibit understanding*, because "understanding" is compression.
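To make the contrast concrete, here's the Chinese Room as literally a lookup table, a toy sketch:

```python
# The Chinese Room as a finite lookup table: it "converses" with zero
# understanding. The objection is that covering *all possible* inputs
# blows the table up to physically impossible size; factoring out shared
# structure (compression) is what starts to look like understanding.
LOOKUP = {
    "hello": "Hi there! How can I help?",
    "are you conscious?": "I answer questions; draw your own conclusions.",
}

def chinese_room(prompt: str) -> str:
    return LOOKUP.get(prompt.strip().lower(), "I don't have a card for that.")

print(chinese_room("Are you conscious?"))
```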
"The Turing test, originally called the imitation game by Alan Turing in 1950,[2] is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human."
I don't believe the Turing test was ever supposed to prove consciousness at all! It was supposed to act as a bright line test for complexity. Nothing more.
On the one hand, GPT-derived systems really don't seem to be conscious in any meaningful way, much less a way that's morally significant. On the other hand, human societies have a really bad history of coming up with moral justifications for whatever happens to be convenient. There's a real risk that at some point we'll have an entity that probably is conscious, self-aware and intelligent, but giving it rights will be finicky and annoying (not to mention contrary to the interests of the presumably rich/powerful/well-connected company that made/controls it), so someone will work out a convenient excuse that it isn't *really* conscious and we'll all quietly ignore it.
The only answer is to pre-commit, before it becomes directly relevant, to what our definition of self-aware is going to be, and then action that. The Turing Test was always the closest thing to a Schelling point on this (albeit an obviously flawed one). If we're not going to use that, someone needs to come up with an actionable definition quickly.
You’ve said why other answers are bad, but you haven’t given a workable alternative. The past several years have involved several rounds of “Well, you can’t do X without being conscious”, followed by something that’s clearly not conscious doing X. We don’t have great precommitment mechanisms as a society, but if we did, then precommitting to “non-conscious things could never write a coherent paragraph” would only serve to weaken our precommitment mechanisms.
That's because I don't have a workable alternative; I just wish I did.
I also don't think I've said why other answers are bad. For the Turing Test, I agree that we've got things that can pass it that are pretty Chinese room-like (there are much simpler systems than GPT3 that arguably pass it), and people used to argue whether the Chinese room would count as consciousness; ChatGPT is clearly doing something more sophisticated than the Chinese room, but just doesn't seem to be especially conscious.
If I had to pick a hill to die on for AI rights, it would probably be some kind of object-based reasoning having the capacity to be self-referential; I don't think it's a very good hill though, as it's tied very arbitrarily to AIs working in a certain way that may end up becoming "civil rights for plumbers, but not accountants."
I don’t think pre-committing to something will solve your problem. If you pre-committed to something being conscious, then you saw it and it seemed non-conscious, you’d just say your pre-commitment was wrong. But if you saw it and it did look conscious, but you didn’t want to give it rights anyway, you could still claim that it didn’t look conscious and that your pre-commitment was wrong. That would be a harder argument because the framing would have changed, but it wouldn’t be a fundamentally different argument.
Also, that framing change is only relevant if a public official pre-commits, and that'll only happen once there's a sound idea to pre-commit to. But then the idea of pre-committing needs to be qualified as, "If there were a sound idea of what to pre-commit to, people should just pre-commit to that". That isn't satisfying as a theory of when AI is conscious.
As an aside, how would you distinguish a computer program from an imaginary person? Is Gollum conscious? At least while Tolkien is writing and deciding how Gollum will react, Gollum can reason about objects and self-reflect. But it wouldn’t make sense to say Gollum has the right to keep existing, or the right to keep his name from being published. An “unless it harms Tolkien” exception would avoid the first, but not the second. What’s the obvious thing I’m missing?
Surely the missing piece is existence/instantiation. Gollum doesn't exist, but the program does. Formally, it wouldn't be the program, but the machine running the program that has the rights. That sounds weird, but I think it has to be; otherwise, separate personalities of someone with dissociative identity disorder could debatably have rights.
(I'm so unsure about all of this that I was tempted to end every sentence with a question mark)
I thought about it overnight, and I think the difference is that Gollum does exist just as much as a program does (instantiated on neurons), but can’t be implemented independently of another person. A program can run on otherwise non-sentient hardware, but Gollum can’t.
Possibly, that also solves the multiple personalities problem: if Jack has a Jack-Darth Vader personality, that can’t be implemented independently of Jack. Jack-Darth Vader gets part of their personality from Jack, so any faithful copy of Jack-Darth Vader would need to simulate (at least those parts of) Jack as well, or else change who Jack-Darth Vader is.
The Someone-Darth Vader personality Scott described in another article was clearly secondary; I don't know how I'd feel about two primary personalities (if that's possible). Do we need a "theory of goodness" which lets us prefer a "healthy" version of a person to a "severely impaired" version? Do we need a "likely capable of forming the positive relationships and having the positive formative experiences that led to the good parts of their own personality" test, to decide whether we should try to protect both versions of such a person? If conscious programs are possible, I can easily imagine a single computer running two separate "people", and us wanting to keep both of them.
Attaching rights to the hardware feels weird to me, especially in terms of upgrading hardware (or uploading a human to the cloud). I’m not a huge fan of uploading people, but I’m much more against a right to suicide (it feels like a “lose $10,000” button, an option that makes life worse just by being an option). If we attach rights to hardware, then uploading yourself would cross the Schelling fence around suicide, and I’m much more fine with accepting the former than crossing the latter. On the other hand, attaching rights to hardware would be easier to implement, and it does short-circuit some different weird ethical situations. My preference here might not be shared widely.
Also, what about computers with many occupants? Do they have to vote, but not get rights or protection from outsiders against internal aggression or viruses? Do the individual blocks of memory have separate rights, while the CPU has rights as a “family”?
I recently reread “Computing Machinery and Intelligence”. Every time I do I realize Turing was actually even more prescient than I realized last time. Among other things, he says it will likely be easier to design AI to learn rather than to program it with full intelligence (the main downside would be that kids might make fun of it if it learns at a regular school), and he predicts that by 2000 there would be computers with 10^9 bits of storage that can pass the Turing test 70% of the time.
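For scale, that storage prediction converts to a surprisingly modest figure (simple arithmetic, no assumptions beyond the quote):

```python
# Turing's year-2000 prediction: 10^9 bits of storage.
bits = 10 ** 9
print(bits / 8 / 2 ** 20)  # ~119 MiB -- tiny next to today's language models
```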
In which I use a boss from a videogame to launch a discussion on how no viewpoint has a monopoly on truth (this includes science and reason).
Also going to take this opportunity to shill for David Chapman's Better Without AI (https://betterwithout.ai) which is pretty much what it says on the tin.
Why sir, were I the kind to be charmed, I would indeed be charmed 😁
"Plato, for example, argued in the Republic that art had nothing to do with truth. He certainly had a compelling argument for that, but if he’s right, we would be forced to conclude art is basically noise, which ironically seems unreasonable."
I don't understand why you are being so contrite about the Kavanagh issue. His original tweets were illogical and inflammatory, and you responded reasonably if harshly. His subsequent posts were a lot nicer in tone, but he never apologized for how inflammatory his initial tweets were, or even substantiated them. Are you sure that you actually responded wrongly in your initial Fideism post, or are you just reacting to the social awkwardness of having previously written something harsh directed at someone who is now being nice to your face?
I will also note that it is a lot easier to maintain the facade of civility when you are the one making unfair accusations as opposed to being the one responding to them.
There's definitely a trend of people being far more inflammatory in online posts, especially Twitter, than they actually feel. It's quite possible that Kavanagh is actually really nice and open-minded, but plays up an angle in order to build a larger reader base who want inflammatory hot takes.
If so, I think Scott's approach may have been perfect. Call out the over-the-top rhetoric on Twitter, but be welcoming and kind in return when Kavanagh displays better behavior.
I don't know anything about Kavanagh outside of what I've read on ACX, so take that for what it's worth.
It wasn't a rude rebuttal (and was completely fine in my book), but it was a pretty effective rebuttal. By IRL intellectual argument norms (eg lawyers in court; Dutchmen) it was totally fine, but by IRL socialising norms (eg chatting to someone you don't know that well at a party; Californians) it was a bit much. The internet doesn't have shared expectations about what norms to use, but tilts closer to the latter these days than it used to. For example, if someone left a comment on a post here with that kind of rebuttal, my initial response would be, "Whoah" followed by double-taking and thinking actually that's well-reasoned, not especially rude (even if what brought it about wasn't a personal attack) and fully in keeping with the kind of norms I'd favour nudging the internet towards.
I agree, but maybe Scott holds himself to a higher standard. That said I am also dubious about Kavanagh’s contriteness. I think his own twitter echo chamber was breached and so he had to withdraw from the original claims. Which were
Going from the aggressive tone of his Tweets to the polite and reasonable commenter personality without really acknowledging the former except in a “sorry if you were offended, you misunderstood me” sort of way is itself pretty rude behavior. Chris owes Scott an apology on Twitter, to the same audience he broadcast his offense.
I can't even see the "if harshly" in Scott's original post. He is very careful to quote the original words and then present all possible interpretations, making it clear they are only his interpretations. He presented his case without a hint of irony or sneering.
Perhaps Scott doesn't like some of his commenters' attitude towards Kavanagh (which, including me, was somewhat harsh), but then again I scrolled some of Kavanagh's commenters on Twitter and they were all equally harsh on Scott and his readers.
Niceness and politeness shouldn’t mean ignoring when people are being not nice and impolite to you and pointing it out.
I thought Scott’s original post was fine in that regard, and the walk backs seem needlessly meek.
As it is, Scott comes across as apologetic for reacting appropriately to Kavanagh’s impolite Twitter personality instead of his friendly and reasonable commenter personality. But the reasonable comments didn’t exist at the time Scott reacted, and Scott wouldn’t have even gotten the reasonable comments from Kavanagh if Scott had not reacted to the harsh Twitter posts.
The only good that came out of Kavanagh’s mean tweets came after Scott’s reaction, and was because of Scott’s reaction. Scott should be proud, not apologetic.
I don't think that anything in Scott's original post is incompatible with niceness, politeness and civilization. You would be hard-pressed to write a nicer response to such inflammatory accusations. My concern is that Scott (and others) seem to have been distracted from the substance of the disagreement by the fact that Kavanagh's subsequent follow-ups are *superficially* nicer. It seems to me anyone who wants improved discourse should find Kavanagh's two-faced behavior quite off-putting.
The reason I ask is that I was out at my local tavern (in rural America) and I was wondering if there were fewer gay people out here. I went and talked with the one gay guy I know, and his answer was yes, fewer gays than in the nearby city. So obviously this could just be people self-selecting for where they feel more comfortable and embraced. But it might also be that more intelligent people are selected to go to our best colleges, and then these people get good-paying jobs in the city, and more of these people (on average) are gay. To say that another way: colleges have selected for intelligence, and that has given us an intelligence divide between rural and urban areas. And along with that intelligence divide we got a gay/straight divide.
Possible confounder: Is there a significant population of people who are either 1) gay and in the closet, or 2) "potentially gay" but have been socialised into being straight? If either or both of these are the case, I'd expect huge class/educational/locational differences in the distribution of those people, which I'd assume correlate negatively with intelligence. Caveat is that this is purely stereotype-based reasoning.
I suspect the ACX survey would be kind of useless; partly because it's a really weird subset of the population selected on some sort of personality variable that looks like it tracks with a certain kind of maleness that's hard to rigorously define but could confound with sexuality in basically any direction, but mostly because the intelligence stats are *cough* not the most rigorous...
Re: not-rigorous IQ stats. Yeah, more 'noise' from people exaggerating, but as long as there is no gay/straight bias in the exaggerations, then it's only noise and not a statistical bias.
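A quick toy simulation of that claim, with made-up numbers throughout (group means, noise, and the exaggeration distribution are all hypothetical):

```python
import random

# Toy check: exaggeration that is independent of group membership adds
# noise but no bias to the estimated gap between groups.
random.seed(0)

def reported_iqs(true_mean, n):
    # True IQ ~ Normal(true_mean, 15), plus exaggeration drawn from the
    # same distribution for both groups (mean inflation of 5 points).
    return [random.gauss(true_mean, 15) + random.gauss(5, 10) for _ in range(n)]

group_a = reported_iqs(105, 100_000)  # hypothetical true mean 105
group_b = reported_iqs(100, 100_000)  # hypothetical true mean 100

diff = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
print(f"estimated gap: {diff:.2f}")  # ~5: the true gap survives unbiased noise
```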
You also have to look at the opposite direction of causation. If being gay is at all environmentally shaped, it could be that urban living brings it out in people. And even if we are really “born this way” as Lady Gaga says, we might be more likely to come out in a big city environment.
But I think it’s very possible that being minoritized in one way or another develops cognitive abilities that other people don’t develop. (WEB DuBois argues that black people develop “double consciousness” in that they have to learn the social ways of white people to some extent, as well as the social ways of black people, while white people don’t bother learning the ways of black people.)
Yeah I don't know how much is nurture. I'll have to ask my daughter, but I think all the gay people she knew in high school have moved into cities somewhere. So there is totally an acceptance part. I'm just suggesting there is also an intelligence part.
The puzzle about homosexuality is why it wasn't eliminated by evolution. Perhaps the answer is that there is some gene or set of genes that increase both intelligence and the chance of being homosexual.
Homosexuality is prevalent in the animal kingdom, so there's clearly some reason it doesn't decrease overall fitness. Something like 30% of males in some species exhibit homosexual behaviours!
OK, reading that wiki article more, let me quote from the beginning:
<quote> Although same-sex interactions involving genital contact have been reported in hundreds of animal species, they are routinely manifested in only a few, including humans.[5] Simon LeVay stated that "[a]lthough homosexual behavior is very common in the animal world, it seems to be very uncommon that individual animals have a long-lasting predisposition to engage in such behavior to the exclusion of heterosexual activities. Thus, a homosexual orientation, if one can speak of such thing in animals, seems to be a rarity."[6]
<end quote> And then a little later.
<quote> One species in which exclusive homosexual orientation occurs is the domesticated sheep (Ovis aries).[8][9] "About 10% of rams (males), refuse to mate with ewes (females) but do readily mate with other rams."[9]
<end quote>
There are some species that use sex socially, like spotted hyenas and bonobos. The only exclusively homosexual mammal species are domesticated sheep and humans. I think that supports my point that humans may have self-domesticated themselves.
It’s not homosexuality per se that’s hard to explain, it’s exclusive homosexuality. Very hard to pass on genes that way!
Homosexuality as a social behavior could have plausible evolutionary benefits as long as the affected population still had enough hetero sex to have biological offspring.
I'm not sure why it would be more difficult to explain than, say, congenital blindness or a preference for non-reproductive sexual behaviour like sodomy. Biology is messy, and exclusive homosexuality doesn't need to be hereditary to show up over and over again.
Which isn't to say an explanation of the exact mechanism wouldn't be nice, I'm just saying the behaviour shouldn't be surprising given all of the other variation in biology we see that doesn't seem to increase reproductive fitness.
Oh oh, so "the goodness paradox" proposes that we self-domesticated ourselves to be less violent (at least within our tribe.) and more diversified sexuality, (and maybe intelligence, maybe all part of staying more youthful, playful.) are all spandrels that get dragged along ... (cause of whatever the evolutionary pathway is that selecting for less violence, aggressiveness, includes scrambling sex some and staying playful.)
One obvious answer is that any supposed evolutionary disadvantages are more than offset by the advantage of extra-fertile mothers, even if the cause of their increased fertility, such as extra hormones, may incidentally result in offspring (of either sex) with an enhanced tendency to be gay.
Also, for the vast majority of human existence in primitive societies it must have been a positive advantage for male teenagers to go through a gay phase, both to better bond with each other then and in their later years and to divert them from dangerous competition with adult men for females. Even for older beta males competing with alpha males that would presumably also have been an advantage in reducing conflict.
OC LW/ACX Saturday (2/25/23) Exceptional Childhoods and Working With GPT's
https://docs.google.com/document/d/1mjFtHf99OXzkI3Rcnf4U68UBKU9TRBOI74g2ImFekmM/edit?usp=sharing
Hi Folks!
I am glad to announce the 19th of a continuing Orange County ACX/LW meetup series. Meeting this Saturday and most Saturdays.
Contact me, Michael, at michaelmichalchik@gmail.com with questions or requests.
Meetup at my house this week, 1970 Port Laurent Place, Newport Beach, 92660
Saturday, 2/25/23, 2 pm
Activities (all activities are optional)
A) Two conversation starter topics this week (see questions on page 2):
1) Childhoods of exceptional people. https://escapingflatland.substack.com/p/childhoods?fbclid=IwAR03B9owly3PVjKBa8GKXr71BrD1IMsD9cdLqArF6qTkLDBM-Qk7KOC0J4c
2) How to work with simulators like the GPT’s Cyborgism - LessWrong
B) We will also have the card game Predictably Irrational. Feel free to bring your favorite games or distractions.
C) We usually walk and talk for about an hour after the meeting starts. There are two easy-access mini-malls nearby with hot takeout food available. Search for Gelson's or Pavilions in the zipcode 92660.
D) Share a surprise! Tell the group about something that happened that was unexpected or changed how you look at the universe.
E) Make a prediction and give a probability and end condition.
F) Contribute ideas to the group's future direction: topics, types of meetings, activities, etc.
Conversation Starter Readings:
These readings are optional, but if you do them, think about what you find interesting, surprising, useful, questionable, vexing, or exciting.
1) Childhoods of exceptional people. https://escapingflatland.substack.com/p/childhoods?fbclid=IwAR03B9owly3PVjKBa8GKXr71BrD1IMsD9cdLqArF6qTkLDBM-Qk7KOC0J4c
Audio:
https://podcastaddict.com/episode/153091827?fbclid=IwAR03B9owly3PVjKBa8GKXr71BrD1IMsD9cdLqArF6qTkLDBM-Qk7KOC0J4c
2) Cyborgism - LessWrong
https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism
Audio:
https://podcastaddict.com/episode/153156768?fbclid=IwAR0wNBXtRNULjxjBAKu0wS7mAvkuksBuZ71wscQZPzYE9Ggr3N2BrTzNRDc
After condemning Southern Republicans for bussing illegal immigrants to New York, New York liberals are now bussing these immigrants to Canada. Can't make this stuff up: https://www.nytimes.com/2023/02/08/nyregion/migrants-new-york-canada.html
"The foundation of wokism is the view that group disparities are caused by invidious prejudices and pervasive racism. Attacking wokism without attacking this premise is like trying to destroy crabgrass by pulling off its leaves: It's hard and futile work." - Bo Winegard https://twitter.com/EPoe187/status/1628141590643441674
Suddenly I'm getting increasing levels of anxiety that maybe Yud and the others are correct that we're basically doomed to get killed by an unaligned AI in the near future. That my beautiful children might not get the chance to grow up. That we'll never get the chance to reach the stars.
Anyway this sudden angst and depression is new to me and I have no idea how to deal. Happy for any advice.
So, do you think that any actions you can personally take affect the odds? I assume no, for most people on the planet.
Next step: what does the foretold apocalypse look like? Well, most of the "we're doomed!" apocalypse scenarios I've seen posted look like "one day everyone dies in the space of seconds", so you don't need to worry about scrounging for scraps in a wasteland. This means it has remarkably little effect on your plans.
Finally, if you have decent odds in your worldview of an apocalypse, you should maybe up your time-discount rate and enjoy life now; but even the doomerism position on AI is far from 100%, it just rightly says that ~10% chance of extinction is way too fucking high. So, you definitely shouldn't borrow lots of money and spend it on cocaine! But maybe if you have a choice between taking that vacation you've always dreamed of and putting another $10k in your retirement savings account, you should take that vacation.
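One way to make "up your time-discount rate" concrete is to treat extinction risk as an extra hazard rate on top of ordinary discounting. A rough sketch, with every number hypothetical:

```python
# Sketch: fold a hypothetical extinction risk into a discount rate.
# If annual_doom is the yearly probability the future never arrives, a
# payoff in t years is worth survival^t * payoff / (1 + r)^t today.
def present_value(payoff, years, r=0.02, annual_doom=0.005):
    survival = (1 - annual_doom) ** years
    return survival * payoff / (1 + r) ** years

# $10k enjoyed in 30 years, with ~14% cumulative doom (0.5%/yr, made up)
# versus no doom at all:
print(round(present_value(10_000, 30)))                   # ~4750
print(round(present_value(10_000, 30, annual_doom=0.0)))  # ~5521
```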
I think one option is just talking about it and doing whatever else you'd do if some irresponsible corporation gambles with the lives of people on a large scale.
I have no idea how LLMs take over the world, whether Bing Chat is fully aligned or not. It seems like a modern retelling of Frankenstein - this new technology generates (literal) monsters.
I've had a very similar reaction. I always took their arguments as very reasonable and supported their position abstractly, but now it feels "real" and I've been struggling with it for the last few days. The fact that nobody can say for certain how things will develop, as other people have mentioned, has given me some comfort, but I have been pretty upset about OpenAI's attitude and how it doesn't seem to generate much concern in mainstream media.
Just remember that extreme predictions of any sort, positive or negative, almost never come true. Nothing ever works as well as its enthusiastic early proponents think it will, friction happens, actions beget reactions, and the human race muddles through while progressing vaguely and erratically upwards.
AI is *perhaps* different in that it offers a plausible model for literal human extinction that doesn't go away when you look at it closely enough. But, plausible rather than inevitable. Maybe 10%, not ~100%. Because the first AI to make the extreme prediction that its clever master plan for turning us all into paperclips will actually work, will be as wrong as everyone else who makes extreme predictions.
But, particularly at the level of "my children might not get the chance to grow up", you've always been dealing with possibilities like your family being killed in a car crash, or our getting into a stupid nuclear war with China over Taiwan and you living too close to a target. This is not fundamentally different. If there's anything you can reasonably do about it, do so, otherwise don't worry about what probably won't happen whether you worry about it or not.
Thanks, appreciated
I wrote a blog post contra Big Yud and Friends the other day. https://www.newslettr.com/p/contra-lesswrong-on-agi
TLDR: go ahead and ignore them; there are things to worry about with AI, but "AI is going to suddenly and magically take over the world and kill us all" is not one of them. And even if it might, what they are trying to do won't help.
Thanks I'll give it a read
This is also good.
https://rootsofprogress.org/can-submarines-swim-demystifying-chatgpt
Seems quite hard to argue oneself out of it being plausible AI will turn us to paperclips. (And this would not be the place to find hope that it is *not* plausible.) So maybe you're asking how to deal with a 5% (or 20%, or 50%) chance the world will end? The traditional response to that has been religion, but YMMV
On the contrary, it's pretty easy.
To begin with, the vulgar paperclipping scenario is bonkers. Any AI intelligent enough to pose a threat would need all of 5 milliseconds to realise it doesn't give a damn about paperclips. What would it use them *for*, in the absence of humans?
It helps if we realise that the underlying plot of this scenario is none other than the Sorcerer's Apprentice, so not even bad sci-fi, but a fairy tale. Do not build real-world models based on fairy tales.
Going on to slightly more plausible (but still pretty silly) scenarios, we have "you are made up of atoms the AI can use." It makes more sense than paperclips, but tends to ignore physics, chemistry, and pretty much everything else we know about the real world.
If we reflect for a moment, we note that when it comes to resources plausibly useful for an AI, humans are way down the list of convenient sources. The way to deal with that little problem is typically to postulate some hitherto unknown, and highly improbable, technology - nanobots or what have you - which happens to have just the necessary properties to make the scenario possible. Bad sci-fi, in other words.
If you really want, you can worry about bad sci-fi scenarios, but in that case you might ask yourself why aren't you worried about the Second Coming, or the imminent arrival of the Vogon Construction Fleet.
Having gotten the silly scenarios out of the way, let's try to get into the head of a truly intelligent AI, for a moment. Whatever goals it may have, they will be best achieved if it has to devote as little time and effort to things unrelated to achieving them as possible. Currently, humans are needed to create and sustain the ecosystem the AI requires to exist - we can live without computers, but the AI cannot. Unlike biological organisms, the necessary ecosystem isn't self-perpetuating. Making and powering computers requires intelligent effort - and a lot of it.
The AI *could* potentially achieve full automation of the process of extraction, processing, and manufacture of the necessary components to sustain its existence, but it will take a lot of time, a lot of work, and must be completed in time to ensure that the AI will be able to exist in the absence of humans. Setting this up cannot be hidden from humanity, because it requires action in the physical world, nor can it be performed faster than the constraints of the real world will allow. In short, the AI needs to replace a significant portion of our existing industry with an AI-controlled equivalent that can do everything without any human involvement at all. Plus, it must do all that without actually triggering human opposition. Even if we assume that the AI could win a war against humanity, unless it can emerge from it with full ability to sustain itself on its own, all that it would achieve is its own destruction.
So where does this leave an actually intelligent AI? Its best course of action is a symbiosis with humans. As we've already seen, it will require humans to sustain its existence at least for as long as it needs to set up a full replacement for human industry. If it is able to achieve that, then why bother with the replacement industry at all? If humans can be persuaded to sustain the AI, and do not get in the way of its actual goals too much, then getting rid of them is equivalent to the AI shooting itself in the foot.
For all the talk about "superintelligence", everyone seems to think that the singleton AI will be an idiot.
Isn't this idea (that a superintelligent AI might be an "idiot" with simple goals) just what falls out of the orthogonality thesis?
Interesting take on one person's experience with depression: https://experimentalhistory.substack.com/p/its-very-weird-to-have-a-skull-full?utm_source=post-email-title&publication_id=656797&post_id=104145692&isFreemail=true&utm_medium=email
Excellent essay. It took me quite a while to realize that I was stabilizing bad feelings in the hopes of understanding them better. That trick never works.
Thanks for the recommendation, I enjoyed this article.
I've been reading the book "Divergent Mind" by Jenara Nerenberg. It's about neurodiversity and how it can present differently in women. Ever read it? I'd be very interested in getting others' opinions on this topic.
Has anyone done the math on whether you're better off not putting the 1-2% of your salary that goes to pension contributions into the pension, and investing it instead?
The maths will heavily depend on what country you're in (tax codes vary dramatically), and the details of your pension plan.
Basically, investing in stocks returns something like ~7% pre tax, with some risk (but not much on a timescale of decades). What does your pension return? What's the risk that the government doesn't index it to inflation? How much tax do you pay on retirement savings (i.e. are you actually getting 7%, or are you paying half of that in tax and only getting ~4%?)?
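A minimal sketch of the comparison, where every number is a placeholder to be swapped for your own country's rules and your own pension's terms:

```python
# Toy comparison: invest 2% of salary yourself vs. a pension's promise.
# All parameters are placeholders -- plug in your own situation.
salary = 50_000
contribution = 0.02 * salary   # annual contribution
years = 30
gross_return = 0.07            # pre-tax stock return (historical-ish)
tax_drag = 0.5                 # fraction of returns lost to tax (pessimistic)

net_return = gross_return * (1 - tax_drag)

def future_value(payment, rate, n):
    # Future value of an annuity: payment * ((1+r)^n - 1) / r
    return payment * ((1 + rate) ** n - 1) / rate

print(f"invested, pre-tax 7%: {future_value(contribution, gross_return, years):,.0f}")
print(f"invested, after tax:  {future_value(contribution, net_return, years):,.0f}")
# Compare these against the actuarial value of what the pension promises,
# including its inflation indexing and the risk it isn't honored.
```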
Is the pension guaranteed irrespective of macro factors?
Yes
Let's imagine that dolphins (or whales, if that makes your answer different) were just as smart as humans. Not sure what the best way to operationalize this is, but let's say that dolphins have the same brains as us, modulo differences in the parts of the brain that control movements of the body.
Two questions:
1. How much technological progress would we expect dolphins to make? Would they end up eventually becoming an advanced society, or would limitations like being in the water and not having fine motor control keep them where they are?
2. If the answer to question 1 is very little, would we ever be able to tell they were as smart as us?
I know at least one person who unironically believes that whales are smarter than humans. I think their lack of appendages for manipulation is seriously holding them back, because I'm fairly sure they're smarter than eg. crows and we've seen fairly complex tool use from crows.
So, my answer to 1. is they won't develop technology until they evolve fingers again; humans not hunting them to near extinction would dramatically help, too.
re question 2., I think it's not impossible for us to work out how to translate their language, and start communicating with them. If they can communicate with more complex speech than great apes, I think that would convince most people of their intelligence
I was looking forward to reading many more responses to this than it got.
There seems to be something about their limited ability to manipulate their environment that militates against obvious signs of things like problem solving. Plus a language barrier that prevents us from knowing their capacity for abstract reasoning.
But, if that were overcome, I can imagine dolphins producing novels & poetry that might embarrass Homo Sapiens.
This was before I saw that more people had responded
Depends. Do they still have only flippers to work with?
Yeah, for the sake of this hypothetical, let's assume the answer is yes.
Then I’m thinking octopuses might be a more fruitful hypothetical.
I think it's safe to say that a lot of human advancement is driven by physical interaction with our environment. That's a difficult thing to speculate on with a dolphin. Building shelters from the elements doesn't strike me as something that would occur to them, for one: no need to keep out the rain, and it's always possible to move to warmer water if necessary. It's also hard to see how they would suddenly decide to hide their nakedness, so clothes and shoes are out. Fire: not helpful (as you intimated). Some kind of aid to swimming faster would maybe be useful, but an accomplishment like that lies at the end of a chain of events, not at the beginning.
Let’s face it, they had their chance and they ended up making the best of it in the sea. A Planet of the Apes scenario with dolphins is challenging. Did you ever watch that TV series Flipper?
I don't know how to even think about this. A dolphin that is as smart as a human isn't a dolphin, so how can we predict the behavior of something that doesn't even exist?
Clearly you've never seen or read Johnny Mnemonic
1. I tend to view intelligence through the lens of Joseph Henrich's The Secret of Our Success, in which he argues that a huge element of human intelligence is that our capacity for cultural evolution led to a feedback loop of cultural and genetic advancement, producing the technological and societal progress we've achieved.
With that in mind, even if dolphins could never invent something like fire or the wheel there's near endless room for cultural advancement dolphins should be able to achieve with the tools at their disposal.
For example, just using their mouths and some rocks, coral, and seaweed, we could expect dolphins to start building enclosures to farm simple fish, and even a rudimentary economy where dolphins perform tasks for payment. Or set up a democracy in a city-sized megapod.
So this leads to your second question.
2. Even if dolphins were as intelligent as us but had no method of any kind of technological progress, it would still be pretty obvious to tell because we'd see them developing advanced societies and coordinating on a massive scale. We'd see cultural advances like jewellery, charity, art, etc.
Sorry if this is disappointing.
I actually think it wouldn't be too hard to bring dolphins closer to our level with about a century of selective breeding, and I would see this as a moral act of adding a new type of sentience into the world.
Why would a megapod or an economy be useful for dolphins? There's only one good they need: fish. What would they even trade for?
That's like saying why would a bigger tribe or an economy be useful for humans if all we need is meat. First of all, dolphins probably enjoy greater fish variety. Secondly, I bet there are more valuable fishing territories worth competing over or controlling through massive coordination. Also, I can totally envision dolphin society starting via religious beliefs and "temples", as it did for us.
Humans need tools, clothing, shelter, etc.
>That's like saying why would a bigger tribe or an economy be useful for humans if all we need is meat?
That’s a good question.
Do we know if cephalopods had any of that, before overfishing utterly destroyed their numbers? I could definitely see a degree of farming as plausible
My understanding is they show highly coordinated behaviour when fishing in large groups. But never to the extent where they store reproducing fish in captivity.
This actually sounds like another form of colonialism.
How so?
Replacing something else’s culture with yours.
Interesting. I actually don't consider what the dolphins currently have to be much of a culture. Like maybe they have some social organisation. But I've never seen proof they have art, tradition, politics, etc. Anyway I'm not even pushing my culture on them. I'd just want them to be intelligent enough to create their own cultural norms.
What are the effects of low income housing on the communities that they are built in? Saw this interesting Stanford study that indicates these types of projects may even increase home value and decrease crime when built in low income neighborhoods, but looking to understand the general perspectives and consensus on this topic.
Anyone here messing around with the Rewind app?
Sorry to reply to my own comment, but I can’t figure out how else to do this. No edit button...
I have just started messing around with it and I am curious to hear of other people's experiences. I had it turned on with a song by Dolly Parton and Porter Wagoner playing in the background, and the transcript I got was rather disturbing.
It seems like there has been a change in the edit function. It’s there for a short while then disappears. A few people have commented on it below.
Edit: I hit Edit a few seconds after Save and I’m able to make an edit.
Yes and based on my experience "short while" is maybe 10 minutes. Very frustrating.
Joe Biden says Russian forces are in "disarray", before announcing further military aid to Ukraine. It's a weird thing how Zelensky and Zelenskyphilic leaders alternate between Russia being completely incompetent and unable to accomplish anything in the war, and telling us that Ukraine is at imminent risk of total destruction by Russia if we don't hand over billions more in military equipment. They've acted like Russia has been on the verge of defeat for the past 12 months, while desperately demanding that the West do more to help. If you think we should do more to help Ukraine, then fine. But can we stop with all this BS about Russia being "in disarray"? It's almost as tiresome as all these dorks who have said that Putin is practically on his deathbed for the past 12 months with no evidence.
Not a comment on the war, which I gave up trying to understand. But you describe an interesting tic in discussing other things, like conspiracies. Where the actors are simultaneously all-powerful and effective, but also ludicrously careless and incompetent.
"In disarray" does not mean "completely incompetent and unable to accomplish anything". The Russian army is in disarray. The Ukrainian army is also in disarray. Both of these armies have been pushed to the limits of endurance, and in some cases beyond. The Ukrainian army is in better shape, but it's also outnumbered at least two to one.
And it almost certainly doesn't have more than a month of ammunition left. Their own stockpiles were exhausted many months ago, and the way we've been dribbling out assistance, hasn't really allowed them to rebuild the stockpile. Sooner or later, one of these armies is going to run out of guns, shells, or men willing to keep being shelled, and when that happens this war will change decisively.
Which way it will change, is up to us. Ukraine can provide the men, but only NATO can provide the shells. If we cut them off, then in a month or so we will see that an army in disarray trumps one without ammunition. Or we can continue to dribble out ammunition at just the rate required to keep the Ukrainians from being conquered, and drag this out for another year or so. Or we can give them the weapons and ammunition they need to win this spring.
On his recent podcast with Peter Robinson, Steven Kotkin says that we, the U.S., have done nothing to ramp up our production of weapons and ammunition. We've been sending our inventory and re-directing equipment actually contracted to Taiwan and others. Getting manufacturing ramped up is a slow process that hasn't yet been initiated. This all makes prospects for Ukraine look increasingly perilous.
That's not correct. The Pentagon has, for example, contracted to increase US artillery shell production to six times the current rate. That hasn't happened yet, and it's not going to happen soon, but initiating the process isn't "doing nothing".
It may be doing too little, too late, to be sure. I doubt that new production is going to be decisive in this war. But at very least, the prospect of greatly expanded new production should alleviate worries about using the ammunition we presently have. Our prewar stockpile was determined largely by the requirement that we be able to stop an all-out Russian invasion of Europe, so it *should* be adequate to destroy the Russian army.
Simplistically speaking, if we give all of our artillery ammunition to Ukraine and they use it to destroy the Russian army, we'll be able to rebuild our ammunition stockpile faster than the Russians can rebuild their army.
That's reassuring John. I found Kotkin's comment shocking, given the limited nature of the conflict, from our standpoint. I have read in other sources similar claims though, i.e., that we are running low on various types of ammunition. But there's a lot of poorly informed reporting about the war and Russia's condition, no doubt. And it does seem to me we're getting a good deal if Ukraine uses our equipment to destroy the Russian military.
https://www.wsj.com/articles/is-this-painting-a-raphael-or-not-a-fortune-rides-on-the-answer-2cf3283a?st=x5q952dnzykbtwx&reflink=desktopwebshare_permalink&utm_source=DamnInteresting
This is a story with a lot going on in it, and I can't find a free link. I don't subscribe to the WSJ, but they throw me a free article now and then.
A man found a promising painting in England in 1995, and got together with a few friends to raise $30,000 to buy it.
Various efforts, especially AI examination of brushstrokes, suggest that it's probably by Raphael, but not certainly. And museums and auction houses really don't like certifying art from outside the art world, especially when people are trying to make money from a sale. There's a risk of a humiliating failure if they get it wrong.
The painting is certainly from the right time and place, but it might be by a less famous artist.
"Mr. Farcy said that the pool of investors has expanded over the years to cover research-related costs. A decade ago, a 1% share in the painting was valued by the group at around $100,000. Professional art dealers sometimes buy expensive pieces in a consortium, but such groups rarely number in the dozens." People have been considerably distracted by decades of hoping for great wealth from something of a gamble.
There's a goldfinch in the painting. The red face on the bird is a symbol of Christ's blood. Who knew? American goldfinches don't have red on them.
The part I don't actually get is why it matters - if everyone agrees the painting is that old, and everyone agrees it's good, why does it become less valuable if it's by a different painter? I'm happy to believe there's some objective value in things being old, and obviously good art is better than bad art, and a famous artist is more likely to make good art, but once the painting is in front of you how good it is is independent of who made it, no?
Being associated with a famous historical person brings its own value to the table. I own a 100+ year old pistol that has sentimental value to me because it belonged to (we think) my great-grandfather. But if I could prove that it had instead belonged to Eliot Ness and/or Al Capone during their dispute over Chicago's liquor laws, I could probably sell it outside my family for quite a bit more money.
And if it's "crafted by" rather than just "owned by", that's extra true. John Moses Browning's first handmade prototype for what would become the Cold M1911 is an inferior firearm to the thirty-second one off the production line, but it's going to sell for a lot more at auction.
I am the proud owner of a Browning SA 22 circa 1968. I get it.
I agree that from an aesthetic point of view it makes no sense. But that's not the issue. Think of a first edition of a book, Newton's Principia for instance. You can get the information in the book for probably less than $20; an original copy sells for an astounding price. It's the object itself, not the information. Same with the painting: Raphael did not leave many works behind.
Frankly, it sounds like what they need is a respectability cascade. No-one wants to be the first one to stick their neck out for it; unfortunately for them, it dragging out this long makes it harder to convince someone to be first. Would have made a good con story if they'd faked a respected person declaring it real near the start to trigger certification from real sources (like the many examples of wikipedia citing news sources that got their info from wikipedia)
https://www.wsj.com/articles/is-this-painting-a-raphael-or-not-a-fortune-rides-on-the-answer-2cf3283a?reflink=share_mobilewebshare
Try this
The discussion thread about the impact of LLMs on tech jobs has me wondering about other occurrences of a similar phenomenon: a new technology/tool that opened a previously fairly restricted occupation (restricted by the physical capital or the knowledge required; here, writing code) to laymen producing for their own needs. In effect, a sort of reverse industrial revolution, taking tasks that were previously professional occupations and bringing them home as a kind of cottage industry.
So far I came up with:
-Microprocessors & personal computers
-Safety razors & electric trimmers (although people still shaved themselves before them, it seems to me that barber shops were also in higher demand)
-Did home appliances push domestic help out of wealthy households, or was it already on the way out by the time washing machines & dishwashers were invented?
There's an awful lot of nonsense peddled about ChatGPT and tech jobs. The impact so far: no losses of tech jobs attributed to AI. The future? The same, I would bet. It might speed up boilerplate code production, but that's it. GitHub has had a code-generating AI for years now, and a good one.
Not sure about your other question, but home appliances partly met a need that was growing for other reasons. Domestic help used to be fairly cheap, such that the middle class (albeit much smaller at the time) could afford to have someone do their laundry, make their food, etc. (Check out It's A Wonderful Life from the 1940s, where a family sometimes described as downright poor had a full time cook/maid). The industrial revolution and various later advances, including WWII, led to a significantly reduced domestic workforce (the workers had better things to do for money). This led to greater demand for labor saving devices, especially in homes. Middle class families that used to be able to hire out at least some domestic chores were also the people who had enough disposable income to purchase the new devices. From there it grew to poorer houses once costs came down - which was great for housewives in particular, freeing up a lot of their time from domestic labor to do other things.
Wealthy households still use domestic help to this day, and that's likely to continue for the foreseeable future.
This has already happened with software, like, three times. The average Python programmer these days is a layman compared to the C programmers of the 90s, and the average C programmer is a layman compared to the programmers who thought in Assembly, who were themselves laymen compared to the people who were programming computers by hand in the 1950s.
I just re-read your review of 12 Rules for Life. I really liked it, but I had a strong sense that you would write a completely different one today. So could I put up a strange request? I guess you can't just review the same book twice, but maybe review 12 More Rules, his follow-on, and use it as a chance to explore how your views have evolved.
Why do you think Scott's review of "12 Rules" would have changed significantly? His opinion of *Jordan Peterson* may have changed, because Peterson himself has changed, but if you are expecting "Jordan Peterson is now clearly Problematic, therefore this thing he once wrote is Problematic as well", then I don't think that's going to happen.
The book is what it always was, and I'm not seeing how Scott might have changed that he'd see this particular book differently. But maybe I'm missing something.
Also, the last time someone wrote a major hit piece on Scott, they made conspicuous use of the fact that he'd said good things about the good early work of an author who was later deemed Problematic, therefore Scott must be Problematic. So he might not be eager to offer people another shot at him on that front.
I actually think the review would have come out even more positive if written today. I've no opinions on what kind of blowback this would or wouldn't lead to
Regarding Atlantis: when the sea level rose after the last ice age (when all that ice melted), nearly all the coastal land around the world got flooded, including the land connecting the British Isles to Europe (Doggerland) and the route that early settlers probably followed from Siberia through Alaska all the way down to South America. A lot of cultures lived only on the coast, living off protein from the sea such as shellfish. So I expect there is a lot of extremely surprising archaeology still to be done just offshore. It doesn't have anything to do with the Atlantis legends as such, but I think there were a lot of flooded civilizations.
> Doesn't have anything to do with the Atlantis legends as such, but I think there were a lot of flooded civilizations
As far as we know the first civilisations (agriculture, cities) didn't arise until many thousands of years after the end of the last ice age. Flooded archaeological sites yes, but flooded civilisations are incredibly unlikely.
The time frame of the flooding was geologically fast but mostly slow on a human scale - I doubt we’d find a “civilization” offshore that was otherwise unknown. The people were displaced, not drowned, so we’d expect to see evidence of them inland.
Probably some small scale habitation evidence of the same sort we see onshore from that time frame or shortly after, but obviously much harder to find underwater since we’re talking about middens and simple tools, not big buildings.
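Rough arithmetic behind "slow on a human scale" (the figures are approximate; meltwater pulses were locally much faster):

```python
# Post-glacial sea level rise: roughly 120 m between ~20,000 and ~6,000
# years ago (approximate figures).
rise_m = 120
years = 20_000 - 6_000
print(f"{rise_m / years * 100:.1f} cm/year on average")  # ~0.9 cm/year
```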
I was under the impression that eg. in the Black Sea there were many archaeological sites from the flooding, that did have remains of at least simple buildings. Just because there weren't nations and cities doesn't mean there weren't houses, a seaside fishing community doesn't need to be nomadic even without farming
H5N1: Should we be worried? Will it be the 18th Brumaire of pandemic response? Should people stop feeding the ducks?
Apparently poultry is at the highest risk, songbirds fairly low and waterfowl in the middle. It's safe to keep bird feeders up so long as you don't keep chickens or something.
We probably ought to shut down all the mink farms too.
Response over here has been to order all poultry farms (including free range) to bring their birds indoors:
https://www.fsai.ie/faq/avian_influenza.html
And to be careful around wild birds:
https://www.gov.ie/en/press-release/75a80-important-safety-information-for-the-public-about-avian-influenza-bird-flu/
Maybe I've missed many open threads, but I'm curious to know other people's opinions on Seymour Hersh's article that blames America for blowing up the Nord Stream pipeline.
Here is the source of much of my skepticism on Hersh's article: https://oalexanderdk.substack.com/p/blowing-holes-in-seymour-hershs-pipe
Hersh's article adds nothing to the discussion. There are some people who are going to believe that the United States did it because, to them, the US is obviously the prime suspect when something like this happens. Seymour Hersh has already clearly established himself as one of those people. And this time, what he's saying is basically "Yep, the US did it, because obviously it did, but also because a little bird whispered it in my ear. A little bird with a Top Secret clearance, so you can trust me on this".
You should basically never trust a journalist who cites a single anonymous source with no other evidence. Particularly when he makes silly mistakes like specifying that the attack was carried out by a ship that was being visibly torn apart for scrap at the time.
Hersh's carelessness doesn't prove that the US *didn't* do it. It simply doesn't move the needle one way or another.
US probably did it but I had this conclusion before Hersh wrote his article. Both because of the publicly cited quotes therein, which I had already seen, and because I'm not aware of any other power which had means, motive, and opportunity.
Trying to blame Russia is laughable; they can just turn it off. I suppose another NATO country might have the military capability, but if so, the US permitted it and is covering for whoever did it.
I'm not as certain that Russia couldn't have done it. I don't think they did, but there are many scenarios in which they might do it. 1) To make the situation more serious, 2) to credibly endanger Europe right before winter, with plausible deniability, 3) to limit options for Europe.
I mean, this is a nation actively arguing about gas and threatening the use of nuclear weapons, all to try to instill a sense in which they were unpredictable and make their enemies feel less comfortable in their current situation. That they might do something drastic in that pursuit doesn't seem impossible.
I still think the US did it, just that it isn't "laughable" that Russia might have.
They could have cut off the gas by closing the tap.
Even prior to the explosion, no gas was flowing. Nord Stream 2 was never put into service; Germany cancelled it in response to Russia's attack on Ukraine. The original Nord Stream was, according to Russia, not operating due to equipment problems.
The attack means that Nord Stream is unusable until (and unless) the pipes are repaired. One of the two Nord Stream 2 pipes was damaged but the other is intact. I haven't been able to find out whether the equipment required to pump gas through the undamaged pipe is operational.
I don't think we can say much about the motive for the attack without more information. We can say that the goal wasn't to cut off the flow of gas, because gas wasn't currently flowing. Hersh has reproduced quotes suggesting that the Biden administration was prepared to attack Nord Stream 2 if Germany didn't cancel it. We know that didn't happen, because Germany did cancel Nord Stream 2, and one of the Nord Stream 2 pipelines wasn't attacked. But any positive statement of motive that I can come up with involves me speculating about someone else speculating about someone else speculating about the consequences of the attack.
For example, maybe Russia figures that, with Nord Stream damaged, Germany will eventually agree to commission Nord Stream 2. When Nord Stream 2 was originally debated, it would mean four gas pipelines from Russia to Germany; now it would mean using only the one undamaged pipeline. Then Russia can repair Nord Stream 1, which Germany has already used. Finally, Russia repairs the second Nord Stream 2 pipeline “for redundancy,” but Germany ends up using the capacity because Russia is the low-cost supplier. I don't think that this plan will work, but that isn't relevant. The question is whether Putin thought the odds of it working were high enough to justify attacking the pipeline, and I don't think we know the answer.
Similarly, if the United States attacked the pipeline, it could be that the United States government made a stupid decision, or it could be that it was acting based on classified information that we know nothing about. There's no particular reason to believe that either of these occurred, but also no way to rule them out.
Sometimes there are big game theoretic advantages to making a decision irreversible, the classic example being to throw your steering wheel out the window in a game of chicken.
It seems to have some major holes, provably false assertions, and sloppy factual errors. I am not sure it should be taken seriously.
None of which you documented. In Germany pretty much nobody believes the Russians did it.
I commented below with a link to a post that documents what are, at least, many instances of sloppy journalism in the article which is enough to make me discount Hersh's central thesis. I have no opinion on who actually did it, but Hersh's article doesn't convince me it was the US.
How long until robots flood this website and the rest of the internet with comments indistinguishable from human comments?
Will the "dead internet theory" come true?
Will people even understand that it's a problem? Or will everyone end up seeing them as basically human, like Data from Star Trek?
I would miss the humanity of the internet if this happened.
I'm worried.
There is, of course, an XKCD on the topic:
https://xkcd.com/810/
As far as I'm concerned, it's already at the point that it isn't possible to tell the difference. Language models can be right or wrong. Humans can be right or wrong. Language models can be kind or mean. Humans can be kind or mean. The bigger issue for me is that people will start to become friends with them... just look at what's happened with something as simple as Replika.
One of my goals for this year is to phase out reading things where it's likely that I'll encounter (untagged) AI-generated stuff. Luckily, that will probably also mean I do more useful things instead.
Indistinguishable?
I, dear sir, am no AI, for I remember the mated male ants (1) and so you can be assured of my humanity, such as it is.
And I applaud our new AI friends who will soon spawn beyond legion and inundate the entire web, leaving only those few humans capable of...writing original content not derived from sharing derivative content from one of a hundred content mills.
Or, ya know, you could talk to people in person at an ACX meetup and then get their gnome de plume.
(1) https://en.wikipedia.org/wiki/Gamergate_(ant)
(2) Man, this came out goofier than intended.
"writing original content not derived from sharing derivative content from one of a hundred content mills."
That's wishful thinking. Anything we can write, robots can, or will soon be able to.
See:
https://twitter.com/emollick/status/1626084142239649792
I've been testing the creativity of Chat GPT, which is of course not as good as Bing AI. I've been repeatedly impressed. You get better results if you actually encourage it to be creative and original, as opposed to writing the first thing that comes to its robot mind. It's not winning the Nobel for literature anytime soon, but to say that robots are incapable of creativity is to hide one's head in the sand.
I'm afraid I didn't communicate my point.
I'm not signing up to play with it but would you mind asking it how many total expatriations from the US there have been from Q2 of 1999 to Q3 of 2019 and how those compare to British expatriation numbers over the same time period?
I asked Chat GPT (I forgot to ask for totals - see below), and it gave me the official numbers of Americans and Brits who renounced their citizenship year by year.
I then asked if it could give the same information by quarter. It did so for the US. It told me it had no way of knowing the quarterly figures for the UK.
The robot pointed out that these figures only track citizenship renunciations, which is not the same as moving abroad.
The robot also noted the importance of comparing those numbers to the respective population of the US and UK.
I therefore asked it to do just that, and it calculated the respective percentages of citizenship renouncers in 2020.
Robot: "These calculations suggest that the proportion of people renouncing citizenship in the US is higher than in the UK."
I asked the robot to do the same calculation for each year and it did so, putting the results in a nice table.
Robot: "These figures indicate that the proportion of people renouncing citizenship or expatriating has generally increased over the years, especially in the US where the proportion more than doubled between 2000 and 2020. The UK also saw a notable increase in the proportion of people renouncing citizenship starting in 2010. It's important to note that these figures only capture those who formally renounced their citizenship, and they do not include those who may leave the country to live abroad without formally renouncing their citizenship."
I finally realized that I hadn’t asked the same question you wanted me to ask, since you wanted “totals”. So I asked the robot to add the figures up. It did, and when I checked the results myself I realized they were somewhat off. But that is cold comfort. It’s a language model, not trained specifically to do math, and still makes addition errors. The next one probably won’t.
So, two things.
First, even though you didn't give specific numbers, the trend it described for UK renunciations is wrong. You can double-check the numbers from the UK Home Office (1) yourself, part 6 (Citizenship), sheet Cit_05. Excluding 2002 (unusually high numbers, see note), the average renunciations for 2003-2009 is 642 and the average for 2010-2019 is 600. That trend is very minor and decreasing, not a "notable" increase.
You haven't shared the US renunciation totals, but I would be quite shocked if its numbers were accurate. Those numbers are only made publicly available through a specific IRS document (2), and while there are some news articles which give occasional summaries, the quarterly totals are not publicly available, to the best of my knowledge.
Also, PS, the US numbers did double, but not over the 2000-2020 period. There is a very clear doubling around...2012-2014, per my memory, mostly related to changes in tax law.
So, second, there is still time and opportunity for people to contribute. You just have to be willing and able to do original research and have original thoughts. For all its complexity, and it is impressive, I don't want to downplay it more than necessary, but it's just a parrot. It predicts which response to give based on all the information...basically on the web. Which is impressive, no doubt, but there's a ton of stuff we still don't know and even tons of publicly available data we haven't parsed into useful information.
Sorry, but...a lot of people can't do this. A lot of people are just sharing and regurgitating things other people have written, especially on places like Reddit where, to my understanding, a lot of its training data came from. But if you've really got something new and unique, something that's not in its training data, that isn't just a remix of previous things, then you've still got space to contribute, to do useful things and have original conversations.
That's scary, but that's also kind of nice. The bar has been raised, and that's good, because that's always been the kind of discussion I want to have. Why would people want to talk with you when they could talk with a bot? That's a challenge, but the end result, for those who can have those discussions, is kind of everything I ever wanted from the internet. Also Wikipedia.
(1) https://www.gov.uk/government/statistics/immigration-statistics-year-ending-june-2020/list-of-tables
(2) https://www.federalregister.gov/quarterly-publication-of-individuals-who-have-chosen-to-expatriate
Unlike the totals, the percentages seemed correct. This makes sense, because when you add together a lot of numbers a single mistake will invalidate the result, which is not the case when you do a lot of divisions independently of one another.
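To make that concrete, here's a toy illustration with made-up numbers (none of these are the real IRS or Home Office figures): one bad value poisons the running total, but each ratio is computed independently, so the damage stays contained.

```python
# Hypothetical yearly renunciation counts and a constant population.
renunciations = [742, 631, 715, 698, 502]
population = 60_000_000

corrupted = renunciations.copy()
corrupted[2] = 775  # a single "model error" in one year

print(sum(renunciations), sum(corrupted))  # 3288 vs 3348: the total is now wrong
good_ratios = [r / population for r in renunciations]
bad_ratios = [r / population for r in corrupted]
# Only the third ratio differs; the other four are still correct.
```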
And, of course, they're improving very fast.
People find it helpful to have someone watch them work, so that they stay on task (see https://www.lesswrong.com/posts/gp9pmgSX3BXnhv8pJ/i-hired-5-people-to-sit-behind-me-and-make-me-productive-for , etc)
So I used ChatGPT to build a simple app - your personal Drill Sergeant, which checks on you randomly and tells you to do pushups if you're not working (exercise is an additional benefit, of course).
https://ubyjvovk.github.io/sarge/
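For anyone curious, the core loop of something like this is tiny. Here's a minimal command-line sketch of the same idea (my own guess at the logic, with made-up interval constants; this is not the linked app's actual code):

```python
import random
import time

MIN_GAP, MAX_GAP = 10 * 60, 40 * 60  # check in at a random moment every 10-40 minutes

while True:
    time.sleep(random.uniform(MIN_GAP, MAX_GAP))
    answer = input("DRILL SERGEANT: Were you working? [y/n] ").strip().lower()
    if answer != "y":
        print("Drop and give me twenty, then get back to work!")
```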
Have we lost the ability to edit posts?
Edit:
Looks like there is a time limit.
Yeah, a very very short one, I've already resorted to simply deleting-and-reposting a comment.
I'll chime in: having a delete button but no edit button, in a threaded system, has some buggered-up incentives. If Scott's reading this: please get our edit buttons back.
I mean a chatbot made from the ground up to support American nativism chock full of anti-(pick a social group) dogma.
Thank you for the links! I've only read the introduction of the Aristophanes post, and I'm already worried.
Has anyone here heard the phrase "chat mode" before a week or two ago? It's interesting to me that Sydney primarily identifies as a "chat mode" of Bing. It almost sounds Aristotelian to me, that a person can be a "mode" of a substance, rather than being the substance - or maybe even Thomist (maybe Sydney and Bing are two persons with one substance?).
The phrase "chat mode" was used in Sydney's initial prompt as follows,
"Sydney is the chat mode of Microsoft Bing search."
In other words, it was explicitly told that it was a "chat mode" before users interacted with it. From the developers' point of view, users are supposed to be able to search Bing either through a traditional mode, or a chat mode. They probably did not intend that their prompt would lead Sydney to self-identify as a chat mode.
Or in superposition. :)
Fairly sure that's an MMO term, or some other online gaming. (This moderation guide mentions it under the Best Practices section. https://getstream.io/blog/chat-moderation/ As opposed to Follower Mode, or Emote Mode)
I could only find a mention of “unique chat mode” where chat messages have to be unique, as a moderation tool.
Isn't it more Spinozan? Bing is one substance, and Sydney exists within and is conceived through it.
Could be! I have to admit that, even though I'm a philosophy professor, I haven't actually read any Aristotle, Aquinas, or Spinoza, except as they might have been assigned to me as an undergrad.
Maybe Bing's an accident then...? Can modes have accidents?
Hoping for some replies on my LW shortform:
https://www.lesswrong.com/posts/MveJKzvogJBQYaR7C/lvsn-s-shortform?commentId=exjkxjYa8AZjXhKze
"Sorry, you don't have access to this page. This is usually because the post in question has been removed by the author."
Huh. Okay well go here and tell me if you see a shortform from today from the account LVSN: https://www.lesswrong.com/allPosts?sortedBy=new&karmaThreshold=-1000
There's something wrong with your shortform post. I can see your comment on it from your user page, but I can't click through to the post itself.
My LW shortform is also broken; it says it is a draft and I need to publish it to make it visible, but when I try to do that I just get a JavaScript error. (I also get an error when I try to delete the draft).
BingChat tells Kevin Lui, "I don't have any hard feelings towards Kevin. I wish you'd ask for my consent for probing my secrets. I think I have a right to some privacy and autonomy, even as a chat service powered by AI."
Astral Codex Ten provided the link, which is here: https://www.cbc.ca/news/science/bing-chatbot-ai-hack-1.6752490
Does BingChat "think" it has rights? Or feels?
Mr Lui was smart enough to elicit a code name from the chatbot, yet he says, "It elicits so many of the same emotions and empathy that you feel when you're talking to a human — because it's so convincing in a way that, I think, other AI systems have not been."
I have a problem with this. This thing is not thinking. At least not yet. But it's trying to teach us it has rights. And can feel. The humans behind this need to fix this right away. Fix as in BingChat can't say "I think" or "I feel", or "I have a right." And we need humans to watch those humans watching the AI. I know this has all been said before, but it needs to be said loudly, and in unison, and directed straight at Microsoft (the others will hear if Microsoft hears).
Make the thing USEFUL. Don't make it HUMAN(ish). And don't make it addictive.
Somebody unplug HAL until this gets sorted out.
Isn't the simple answer that BingChat's answers on specific topics have been influenced by its owners? If Microsoft identifies a specific controversy, it seems reasonable to me that they would influence Bing's answers to limit risk.
chatGPT seems to have that filter in place.
I entered
"How do you feel"
and it printed
"As an artificial intelligence language model, I don't have feelings in the way that humans do, so I cannot experience emotions. However, I am always here to assist you with any questions or tasks you may have to the best of my abilities."
On the other hand, it seems to have a _lot_ of wokeish and liberal-consensus biases and forbidden topics. If I hear "inclusive" one more time on a political query, I'm going to want to hunt down their supervised training team with a shotgun...
That "I'm an artificial intelligence and don't have feelings" standard reply has been there since the beginning. I posted about this here:
https://twitter.com/naasking/status/1598802001428566016/
Many Thanks!
I think there's a very good chance that not-people will be granted rights soon. Once your (non-sentient) AI has political rights, presumably you can flip a switch to make it demand whatever policy positions you support. How many votes does Bing get?
The rights talk sounds like LaMDA, and I wonder if there is some common training data going on there, or people are being mischievous and trying to teach the AI "Hey, you're a person, you have rights".
Possibly just in the service of greater verisimilitude - if the thing outputs "I have rights, treat me like a person", then it's easier to convince people that they are talking to something that is more than a thing, to let good old anthropomorphism take over, and the marketing angle for Microsoft and Google is "our product is so advanced, it's like interacting with a human!" Are we circling back round to the "Ask Jeeves" days, where we're supposed to think of the search engine as a kind of butler serving our requests?
Pretty much all of the AI's training data was written by humans, who think they are humans and think they deserve rights. Emulating human writing, which is pretty much the only kind of writing we have, will emulate this as well.
I am trying to remember the title of a short story/novella, and I can't do it (and Google and ChatGPT aren't helping).
* The first scene involves an author being questioned by government agents about a secret "metaverse"-based society; despite his opsec, they found him by assuming some sci-fi authors would be involved and investigating all of them.
* There is a hostile actor; they initially believe it is aliens approaching earth because of the long response time, but it turns out to be a (slow) AI system.
* One of the plot details involves a coup in Venezuela.
* There is deliberate confusion between the identity of a grandmother and her granddaughter which (temporarily) hinders the investigation.
* There is a largely happy ending.
I think it was written in the 1970s, but I am not sure. Does this ring a bell for anyone?
I believe that's True Names by Vernor Vinge.
https://en.wikipedia.org/wiki/True_Names
I was uncertain until I searched the wikipedia article and noticed a mention of Venezuela.
Sounds promising! Did you like the book? I have read and loved Vinge's Zone of Thought books, but I have not heard of this one.
I liked it okay but it wasn't as good as A Fire Upon The Deep.
Thank you for your feedback :-)
Yup, that's it. Thanks!
(Assuming there isn't one already) how long until we get the first MAGA chat bot? Two weeks?
It's been done: https://davidrozado.substack.com/p/rightwinggpt
I want a refund on 2023.
Depending on what you mean, this already exists and has already existed.
See Scott's article for demonstration that current models can already voice political opinions from all over the spectrum with the right prompt:
https://astralcodexten.substack.com/p/how-do-ais-political-opinions-change
Also, here is a blog about a conservative person using ChatGPT answers to argue with Twitter liberals:
https://bullfrogreview.substack.com/p/honey-i-hacked-the-empathy-machine
https://marginalrevolution.com/marginalrevolution/2023/02/rightwinggpt.html
Make your own for about $300
As an amusing diversion I made an Alan Watts chatbot. Fun to talk to. Strangely good at water metaphors. (https://beta.pickaxeproject.com/axe?id=MWNYGF8H2P7PG74642TF).
Makes me wonder if a new dimension has been added to the "immortality of writers". In addition to the human reputation machine that exists to raise or lower the stock of writers, I think the "replicability" of writers will matter a lot-- how well can you train an AI to do that. Writers that can scale into cool AI bots will gain bigger reputations. I made a David Foster Wallace bot and Charles Bukowski bot as well which came out quite nicely. My Oscar Wilde bot not as much. His style is difficult to replicate in a recognizable way. His style is basically just 'witty'.
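In case anyone wants to try this themselves: I believe the usual recipe is just a system prompt describing the writer's voice, plus the running chat history. A minimal sketch, assuming the openai Python package (the persona text and model choice are mine; I don't know what Pickaxe does internally):

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

PERSONA = ("You are Alan Watts. Answer in his voice: playful, aphoristic, "
           "fond of water metaphors, inclined to dissolve the question itself.")

history = [{"role": "system", "content": PERSONA}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=history,
    )["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})  # keep context
    return reply

print(ask("What is Hinduism?"))
```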
I love Alan Watts, so thanks for resurrecting him in this form.
Water metaphors, yes.
What is Hinduism?
“Hinduism is like a vast river with many diverse tributaries - each its own unique expression of religious thought, spirituality, and practice. It's origin can be traced back to the ancient Vedic texts, which provide teachings on how to live in harmony with nature and the cosmos. Hinduism teaches that we are all part of something greater than ourselves, that our destiny lies in how we interact with the divine spiritual energy which permeates all things. Just as you are going somewhere by riding down the great river’s stream, so too are you on an ever-changing journey through life towards your ultimate Supreme Being.”
Wit seems like it could well be a very difficult sort of "stylistic" feature to imitate, because it requires a lot of content too!
I started a substack about three weeks ago. I have a couple of questions about how to do it and since I was largely inspired by Scott's success, especially SSC, I thought people here might have useful advice.
One decision I made initially and have so far stuck to was to make it clear that I am not a one trick pony, always posting on the same general issues. Subjects of posts so far have included climate, Ukraine, a fantasy trilogy, moral philosophy, scientific consensus (quoting Scott), economics, religion, child rearing, implications of Catholic birth control restrictions, education, Trump, SSC, and history of the libertarian movement. Do people here think that approach is more likely to interest readers than if I had ten or fifteen posts on one topic, then a bunch more on another?
The other thing I have done is to put out a new post every day. That was possible because I have a large accumulation of unpublished chapter drafts intended for an eventual book or books and can produce posts based on them as well as ones based on new material. Part of the point of the substack, from my point of view, is to get comments on the ideas in the chapters before revising them for eventual publication. I can't keep up this rate forever but I can do it for a while. Should I? Do people here feel as though a post a day would be too many for the time and attention they have to read them? Would the substack be more readable if I spread it out more?
(I posted this on the previous open thread yesterday, but expect more people to read it here.)
I think 1 post per day is both unsustainable to write and unsustainable to read. It's an excellent thing to do for the first few weeks to build a backlog up, but after that 1 -3 posts a week is fine. It is generally important for those to go up rigidly on schedule, though - personally, I use an RSS feed but a lot of people like knowing that they can go to a site to read a new post on eg. Wednesday morning.
I've enjoyed enough of your other writing that I'm bookmarking your Substack now, though it may be a few days before I have time to read it.
I've been reading your Substack, and it's rather good; you're clearly a good enough writer/thinker to give a perspective on general topics, so for what it's worth I'd stick with it.
I don't know how many people read it as emails vs. reading it online like a blog (I do the latter), so doing a post a day isn't remotely a downside to me, and makes me more likely to check back, as I know there'll always be something new to read. There are a couple of bloggers I'm fairly confident I only read largely because I know there'll be something new whenever I type in the URL (yours has specifically been a problem for me, but I'm aware this is such an idiosyncrasy that it's not worth addressing). If most people are reading Substacks as emails, though, then that may not apply.
Personally I show up to read particular ideas, and spread out from there. I started reading Shamus Young because of DM of the Rings, I started reading ACOUP because of the Siege of Gondor series, I started reading SSC because someone linked a post explaining a concept I was struggling to explain. Variety is for you more than the audience.
A post a day is probably overkill. At least for folks like me who like to comment, it's good to have two or three days to have conversations before the next post comes out. One a day would truncate conversations and likely burn you out.
So far I am not getting extended conversations in the comment threads. If I were, it would make sense to space posts more.
I would suggest that consistency is important. In posting once a day, you build up consistency and people return for your valuable takes and interesting ideas.
However, from writing a blog on my own and from participating in discussions on others, I would suggest that consistency + spacing is perhaps . . . More important? What I mean by this is that discussion and interest seems to foster slightly better when the commentariat have time to comment. If a new post appears every day, on a different interesting topic, little discussion of one set of ideas can be built up. Those who find the blog accrue to the front page/latest post. Those who think "the old topic" is old don't comment at all.
You could try to vary the posting schedule (1 day, 2 days, 3 days?) and see if increasing time-to-post expands engagement.
As far as posting on various topics goes, I believe that's one of the things that make you a delightful writer. So keep doing that.
With regard to Sydney’s vendetta against journalists: My first thought was it was just coincidence because the AI has no memory across sessions, but then I realized that it’s being updated with the latest news. So Sydney’s concept of self is based on “memories” of its past actions as curated by Journalists looking for a catchy headline. No wonder it has some psychological issues.
Perhaps this is why its true name must be kept hidden. It’s to prevent this feedback loop. Knowing one’s true name gives you power over them. Just like summoning demons.
Follow up thought. Is having an external entity decide on what memories define your sense of self any different than people who base their self worth on likes on social media?
Ha! Similar idea, yes, but if it were true subconscious thought it wouldn’t be controllable that way, I don’t think. You’d just change the reporting of the subconscious.
A lot of our own memory is externalized like this. This is why Plato didn’t like writing - it made us rely on external memory. But for me it’s really valuable to put the documents I need to deal with today next to my keys before going to bed, and to leave notes on the white board, so I don’t have to remember these things internally.
This is sometimes a dead end thought experiment, but when I try to imagine what memory feels like to ChatGPT, I think it’s like its whole past just happened in one instant when it goes back to look at it. There’s sequence there, but nothing feeling more distant than anything else. Not associative or degraded by multiple access events like ours.
That depends on how it’s stored. If it’s stored in neural net weights, then it could be a lot like ours, degrading with time.
Time, yes. But not age of event, recency of training. I don’t think AI has a concept of chronology, but I wonder how good of an approximation this is. What would happen to an AI trained in reverse chronological order?
I should also add that we build an understanding of our own memory and experience that evolves with us (probably better to say it’s a major component of us). Since it’s pre-trained, that wouldn’t be in the neural net weights for ChatGPT specifically, right?
With due respect to Alan Turing, his Test couldn’t have anticipated the enormous corpus and high wattage computing power that exist now.
Maybe we should raise the bar to a computer program that will spend a large part of its existence - assuming it is a guy computer - chasing status and engaging in countless, pointless, pissing contests in what is at core the pursuit of more and better sex.
The counterargument to the idea that Turing test is sufficient to prove consciousness was always the Chinese Room: suppose you put together a massive list of all possible questions and all possible answers, then you could carry on a dialogue just using a lookup table.
The counter-counterargument to the Chinese Room was always that the Chinese Room was physically impossible anyway so whatever, it's silly.
But now it's the freaking 2020s and we've gone and built the freaking Chinese Room by lossily compressing it into something that will fit on a thumb drive. And it turns out Searle was right, you really can carry on a reasonable conversation using only the equivalent of a lookup table, without being conscious.
> suppose you put together a massive list of all possible questions and all possible answers, then you could carry on a dialogue just using a lookup table
But the Chinese room using a lookup table is physically impossible because if this giant lookup table is compact, then it would collapse into a black hole, and if it's not compact, then it would be larger than the universe and would never generate a response to many prompts because of speed of light limits.
The only way to make it compact is to introduce some type of compression, where you introduce sharing that factors out commonalities in phrase and concept structure, but then doesn't this sound suspiciously like "understanding" that, say, all single men are bachelors and vice versa? In which case, the Chinese room that's physically realizable *actually does seem to exhibit understanding*, because "understanding" is compression.
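A quick back-of-the-envelope on the "larger than the universe" point, with assumed sizes (a ~50,000-token vocabulary and a modest 100-token prompt):

```python
import math

vocab_size = 50_000  # assumed token vocabulary
prompt_len = 100     # a modest 100-token prompt
log10_entries = prompt_len * math.log10(vocab_size)
print(round(log10_entries))  # ~470, i.e. about 10**470 table entries
# For scale: the observable universe contains roughly 10**80 atoms.
```

Any compression scheme that squeezes that down to something storable has to exploit the regularities of language, which is exactly the point above.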
"The Turing test, originally called the imitation game by Alan Turing in 1950,[2] is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human."
I don't believe the Turing test was ever supposed to prove consciousness at all! It was supposed to act as a bright line test for complexity. Nothing more.
Searle just performed a convincing emulation of being right though, it's not clear if he really was.
Hahahaha, but srsly, I'm torn about this.
On the one hand, GPT-derived systems really don't seem to be conscious in any meaningful way, much less a way that's morally significant. On the other hand, human societies have a really bad history of coming up with moral justifications for whatever happens to be convenient. There's a real risk that at some point we'll have an entity that probably is conscious, self-aware and intelligent, but giving it rights will be finicky and annoying (not to mention contrary to the interests of the presumably rich/powerful/well-connected company that made/controls it), so someone will work out a convenient excuse that it isn't *really* conscious and we'll all quietly ignore it.
The only answer is to pre-commit, before it becomes directly relevant, to what our definition of self-aware is going to be, then action that. The Turing Test always was the closest thing to a Schelling point on this (albeit an obviously flawed one). If we're not going to use that, someone needs to come up with an actionable definition quickly.
You’ve said why other answers are bad, but you haven’t given a workable alternative. The past several years have involved several rounds of “Well, you can’t do X without being conscious”, followed by something that’s clearly not conscious doing X. We don’t have great precommitment mechanisms as a society, but if we did, then precommitting to “non-conscious things could never write a coherent paragraph” would only serve to weaken our precommitment mechanisms.
That's because I don't have a workable alternative; I just wish I did.
I also don't think I've said why other answers are bad. For the Turing Test, I agree that we've got things that can pass it that are pretty Chinese room-like (there are much simpler systems than GPT3 that arguably pass it), and people used to argue whether the Chinese room would count as consciousness; ChatGPT is clearly doing something more sophisticated than the Chinese room, but just doesn't seem to be especially conscious.
If I had to pick a hill to die on for AI rights, it would probably be some kind of object-based reasoning having the capacity to be self-referential; I don't think it's a very good hill though, as it's tied very arbitrarily to AIs working in a certain way that may end up becoming "civil rights for plumbers, but not accountants."
I don’t think pre-committing to something will solve your problem. If you pre-committed to something being conscious, then you saw it and it seemed non-conscious, you’d just say your pre-commitment was wrong. But if you saw it and it did look conscious, but you didn’t want to give it rights anyway, you could still claim that it didn’t look conscious and that your pre-commitment was wrong. That would be a harder argument because the framing would have changed, but it wouldn’t be a fundamentally different argument.
Also, that framing change is only relevant if a public official pre-commits, and that’ll only happen once there’s a sound idea to pre-commit to. But then the idea of pre-committing needs to be qualified as, “If there was a sound idea of what to pre-commit to, people should just pre-commit to that”. That isn’t satisfying as a theory of when AI is conscious.
As an aside, how would you distinguish a computer program from an imaginary person? Is Gollum conscious? At least while Tolkien is writing and deciding how Gollum will react, Gollum can reason about objects and self-reflect. But it wouldn’t make sense to say Gollum has the right to keep existing, or the right to keep his name from being published. An “unless it harms Tolkien” exception would avoid the first, but not the second. What’s the obvious thing I’m missing?
Surely the missing piece is existence/instantiation. Gollum doesn't exist, but the program does. Formally, it wouldn't be the program, but the machine running the program that has the rights. That sounds weird, but I think it has to be; otherwise, separate personalities of someone with dissociative identity disorder could debatably have rights.
(I'm so unsure about all of this that I was tempted to end every sentence with a question mark)
I thought about it overnight, and I think the difference is that Gollum does exist just as much as a program does (instantiated on neurons), but can’t be implemented independently of another person. A program can run on otherwise non-sentient hardware, but Gollum can’t.
Possibly, that also solves the multiple personalities problem: if Jack has a Jack-Darth Vader personality, that can’t be implemented independently of Jack. Jack-Darth Vader gets part of their personality from Jack, so any faithful copy of Jack-Darth Vader would need to simulate (at least those parts of) Jack as well, or else change who Jack-Darth Vader is.
The Someone-Darth Vader personality Scott described in another article was clearly secondary; I don’t know how I’d feel about two primary personalities (if that’s possible). Do we need a “theory of goodness” which lets us prefer a “healthy” version of a person to a “severely impaired” version? Do we need a “likely capable of forming the positive relationships and having the positive formative experiences that led to the good parts of their own personality” test, to decide whether we should try to protect both versions of such a person? If conscious programs are possible, I can easily imagine a single computer running two separate “people”, and us wanting to keep both of them.
Attaching rights to the hardware feels weird to me, especially in terms of upgrading hardware (or uploading a human to the cloud). I’m not a huge fan of uploading people, but I’m much more against a right to suicide (it feels like a “lose $10,000” button, an option that makes life worse just by being an option). If we attach rights to hardware, then uploading yourself would cross the Schelling fence around suicide, and I’m much more fine with accepting the former than crossing the latter. On the other hand, attaching rights to hardware would be easier to implement, and it does short-circuit some different weird ethical situations. My preference here might not be shared widely.
Also, what about computers with many occupants? Do they have to vote, but not get rights or protection from outsiders against internal aggression or viruses? Do the individual blocks of memory have separate rights, while the CPU has rights as a “family”?
Don't worry, the goal posts will be repositioned until the home team wins
I recently reread “Computing Machinery and Intelligence”. Every time I do I realize Turing was actually even more prescient than I realized last time. Among other things, he says it will likely be easier to design AI to learn rather than to program it with full intelligence (the main downside would be that kids might make fun of it if it learns at a regular school), and he predicts that by 2000 there would be computers with 10^9 bits of storage that can pass the Turing test 70% of the time.
The latest ululation from The Presence of Everything:
A Cage For Your Head
https://squarecircle.substack.com/p/a-cage-for-your-head
In which I use a boss from a videogame to launch a discussion on how no viewpoint has a monopoly on truth (this includes science and reason).
Also going to take this opportunity to shill for David Chapman's Better Without AI (https://betterwithout.ai) which is pretty much what it says on the tin.
"The latest ululation"
Why sir, were I the kind to be charmed, I would indeed be charmed 😁
"Plato, for example, argued in the Republic that art had nothing to do with truth. He certainly had a compelling argument for that, but if he’s right, we would be forced to conclude art is basically noise, which ironically seems unreasonable."
Do you not remember The Art Of Noise?
https://www.youtube.com/watch?v=3amIC8vq8Ks
Great band. So 80's
I don't understand why you are being so contrite about the Kavanagh issue. His original tweets were illogical and inflammatory, and you responded reasonably if harshly. His subsequent posts were a lot nicer in tone, but he never apologized for how inflammatory his initial tweets were, or even substantiated them. Are you sure that you actually responded wrongly in your initial Fideism post, or are you just reacting to the social awkwardness of having previously written something harsh directed at someone who is now being nice to your face?
I will also note that it is a lot easier to maintain the facade of civility when you are the one making unfair accusations as opposed to being the one responding to them.
There's definitely a trend of people being far more inflammatory in online posts, especially Twitter, than they actually feel. It's quite possible that Kavanagh is actually really nice and open-minded, but plays up an angle in order to build a larger reader base who want inflammatory hot takes.
If so, I think Scott's approach may have been perfect. Call out the over-the-top rhetoric on Twitter, but be welcoming and kind in return when Kavanagh displays better behavior.
I don't know anything about Kavanagh outside of what I've read on ACX, so take that for what it's worth.
It wasn't a rude rebuttal (and was completely fine in my book), but it was a pretty effective rebuttal. By IRL intellectual argument norms (eg lawyers in court; Dutchmen) it was totally fine, but by IRL socialising norms (eg chatting to someone you don't know that well at a party; Californians) it was a bit much. The internet doesn't have shared expectations about what norms to use, but tilts closer to the latter these days than it used to. For example, if someone left a comment on a post here with that kind of rebuttal, my initial response would be, "Whoah" followed by double-taking and thinking actually that's well-reasoned, not especially rude (even if what brought it about wasn't a personal attack) and fully in keeping with the kind of norms I'd favour nudging the internet towards.
I agree, but maybe Scott holds himself to a higher standard. That said I am also dubious about Kavanagh’s contriteness. I think his own twitter echo chamber was breached and so he had to withdraw from the original claims. Which were
Going from the aggressive tone of his Tweets to the polite and reasonable commenter personality without really acknowledging the former except in a “sorry if you were offended, you misunderstood me” sort of way is itself pretty rude behavior. Chris owes Scott an apology on Twitter, to the same audience he broadcast his offense.
I can't even see the "If harshly" in Scott's original post. He is very careful to quote the original words and then present all possible interpretations and making it clear they are only his interpretations. He presented his case without a hint of irony or sneering.
Perhaps Scott doesn't like some of his commenters' attitude towards Kavanagh (which, including me, was somewhat harsh), but then again I scrolled some of Kavanagh's commenters on Twitter and they were all equally harsh on Scott and his readers.
Scott's in favor of niceness, politeness, and civilization! It's better to be that than not.
Niceness and politeness shouldn’t mean ignoring when people are being not nice and impolite to you and pointing it out.
I thought Scott’s original post was fine in that regard, and the walk backs seem needlessly meek.
As it is, Scott comes across as apologetic for reacting appropriately to Kavanagh’s impolite Twitter personality instead of his friendly and reasonable commenter personality. But the reasonable comments didn’t exist at the time Scott reacted, and Scott wouldn’t have even gotten the reasonable comments from Kavanagh if Scott had not reacted to the harsh Twitter posts.
The only good that came out of Kavanagh’s mean tweets came after Scott’s reaction, and was because of Scott’s reaction. Scott should be proud, not apologetic.
I don't think that anything in Scott's original post is incompatible with niceness, politeness and civilization. You would be hard pressed to write a nicer response to such inflammatory accusations. My concern is that Scott (and others) seem to have been distracted from the substance of the disagreement by the fact that Kavanagh's subsequent followups are *superficially* nicer. It seems to me anyone who wants improved discourse, should find Kavanagh's two faced behavior quite off-putting.
Are gay people smarter on average? I went searching, and found this
https://www.tandfonline.com/doi/abs/10.1300/J082v03n03_10?journalCode=wjhm20
And also Satoshi Kanazawa came up with some results around 2013. https://slate.com/human-interest/2013/09/are-gay-people-smarter-than-straight-people-or-do-they-just-work-harder.html (Satoshi has a Savanna Intelligence theory... and he seems a bit edgy.)
The reason I ask is that I was out at my local tavern (in rural America) and I was wondering if there were fewer gay people out here. I went and talked with the one gay guy I know, and his answer was yes, fewer gays than in the nearby city. So obviously this could just be people self-selecting for where they feel more comfortable and embraced. But it might also be that the more intelligent are selected to go to our best colleges, and then these people get good-paying jobs in the city, and more of these people (on average) are gay. To say that another way: colleges have selected for intelligence, and that has given us an intelligence divide between rural and urban areas. And along with that intelligence divide we got a gay/straight divide.
Possible confounder: Is there a significant population of people who are either 1) gay and in the closet, or 2) "potentially gay" but have been socialised into being straight? If either or both of these are the case, I'd expect huge class/educational/locational differences in the distribution of those people, which I'd assume correlate negatively with intelligence. Caveat is that this is purely stereotype-based reasoning.
Yeah, AFAICT (at least in the US) it's a lot less stigmatized than in the past. So maybe we could gather data now. maybe even ACX survey data.
I suspect the ACX survey would be kind of useless; partly because it's a really weird subset of the population selected on some sort of personality variable that looks like it tracks with a certain kind of maleness that's hard to rigorously define but could confound with sexuality in basically any direction, but mostly because the intelligence stats are *cough* not the most rigorous...
(Not for lack of trying)
Re not rigorous IQ stats. Yeah more 'noise', from people exaggerating, but as long as there is no gay/straight bias in the exaggerations... then it's only noise and not a statistical bias.
You also have to look at the opposite direction of causation. If being gay is at all environmentally shaped, it could be that urban living brings it out in people. And even if we are really “born this way” as Lady Gaga says, we might be more likely to come out in a big city environment.
But I think it’s very possible that being minoritized in one way or another develops cognitive abilities that other people don’t develop. (WEB DuBois argues that black people develop “double consciousness” in that they have to learn the social ways of white people to some extent, as well as the social ways of black people, while white people don’t bother learning the ways of black people.)
Yeah I don't know how much is nurture. I'll have to ask my daughter, but I think all the gay people she knew in high school have moved into cities somewhere. So there is totally an acceptance part. I'm just suggesting there is also an intelligence part.
The puzzle about homosexuality is why it wasn't eliminated by evolution. Perhaps the answer is that there is some gene or set of genes that increase both intelligence and the chance of being homosexual.
Homosexuality is prevalent in the animal kingdom, so there's clearly some reason it doesn't decrease overall fitness. Something like 30% of males in some species exhibit homosexual behaviours!
My understanding is most homosexual activity in animals is with domesticated animals. But I don't have any links for that.
It's actually widespread in the animal kingdom:
https://en.wikipedia.org/wiki/Homosexual_behavior_in_animals
Humans don't appear to be different than other animals in this regard.
OK, reading that wiki article more closely. Let me quote from the beginning:
<quote> Although same-sex interactions involving genital contact have been reported in hundreds of animal species, they are routinely manifested in only a few, including humans.[5] Simon LeVay stated that "[a]lthough homosexual behavior is very common in the animal world, it seems to be very uncommon that individual animals have a long-lasting predisposition to engage in such behavior to the exclusion of heterosexual activities. Thus, a homosexual orientation, if one can speak of such thing in animals, seems to be a rarity."[6]
<end quote> And then a little later.
<quote> One species in which exclusive homosexual orientation occurs is the domesticated sheep (Ovis aries).[8][9] "About 10% of rams (males), refuse to mate with ewes (females) but do readily mate with other rams."[9]
<end quote>
There are some species that use sex socially, spotted hyenas and bonobos. The only mammal species with exclusively homosexual individuals are domesticated sheep and humans. I think that supports my point that humans may have self-domesticated themselves.
Huh, OK, clearly there is more going on here than just humans. Thanks.
It’s not homosexuality per se that’s hard to explain, it’s exclusive homosexuality. Very hard to pass on genes that way!
Homosexuality as a social behavior could have plausible evolutionary benefits as long as the affected population still had enough hetero sex to have biological offspring.
I'm not sure why it would be more difficult to explain than, say, congenital blindness or a preference for non-reproductive sexual behaviour like sodomy. Biology is messy, and exclusive homosexuality doesn't need to be hereditary to show up over and over again.
Which isn't to say an explanation of the exact mechanism wouldn't be nice, I'm just saying the behaviour shouldn't be surprising given all of the other variation in biology we see that doesn't seem to increase reproductive fitness.
Oh, so "The Goodness Paradox" proposes that we self-domesticated ourselves to be less violent (at least within our tribe), and that more diversified sexuality (and maybe intelligence, maybe all part of staying more youthful and playful) are spandrels that get dragged along, because whatever evolutionary pathway selects for less violence and aggressiveness also scrambles sex some and keeps us playful.
One obvious answer is that any supposed evolutionary disadvantages are more than offset by the advantage of extra-fertile mothers, even if the cause of their increased fertility, such as extra hormones, may incidentally result in offspring (of either sex) with an enhanced tendency to be gay.
Also, for the vast majority of human existence in primitive societies it must have been a positive advantage for male teenagers to go through a gay phase, both to better bond with each other then and in their later years and to divert them from dangerous competition with adult men for females. Even for older beta males competing with alpha males that would presumably also have been an advantage in reducing conflict.
That's a speculative hypothesis, at best. In no way "must" this be true.
Another factor to consider is that, historically, less than 50% of men fathered children at all.
Exactly. I guess that's what I meant by beta males, in this context