At the end of November/start of December here in the EU there was some talk of a brewing trade war with the US over the IRA subsidies. However, I saw no US pundits (like Noah Smith) talk about this. Was this potential trade war news on your side of the ocean?
The American analyst attitude was that the EU's threats of a trade war were another in a long list of bad moves by the EU. It was not taken at all seriously and was, in several cases, basically mocked. The more Europe-friendly commentators mostly ignored it because it was problematic for their advocacy. And because that was the response among the policy experts, it never filtered down to the general population.
In general, American politics is in a rather anti-trade mood. There's a consensus (which is objectively correct) that many allies have been cheating on their commitments and generally free riding. So the debate is mostly between isolationists who want to "even the field" through tough renegotiations with allied countries on one side, and on the other, people who want to create a trade bloc of American allies. Basically, was free trade a mistake, or was it only a mistake not to favor friendly democratic regimes? There are very few free traders left, and a general anti-China consensus.
After decades of cheating on their commitments, not moving on Ukraine or Asia, and having their own subsidy/barrier programs, the EU has very little capital for this kind of demand. So them barging in and complaining was largely seen as a diplomatic blunder by Europe's friends, as if the EU were completely deaf to the political climate in the United States. Meanwhile the Japanese and British (and a few of their friends like Australia) had more success in advocating for themselves.
But yeah, tl;dr is that the noises about a trade war were not taken seriously.
You'll never regret spending a day at the Met. For something a bit more off the beaten path, the Cloisters is lovely, and in a less visited part of the city.
I really like the MoMA; the Whitney also has a great collection and is near the High Line. Might be a bit chilly for a walk next week, but if you're bundled up it's a great vantage point for seeing the city.
The Tenement Museum is also really interesting, particularly if you have ancestors who could have been potential residents.
Girard taught Peter Thiel at Stanford and Thiel was influenced by Girard's ideas. A lot of the promoters of Girard's ideas seem to have links to Thiel.
The originality of his ideas seems much less important than their truth.
Anyway, René Girard published his main original works in the '50s or '60s. So they may sound unoriginal to you only because they are, as a famous Slate Star Codex article said, in the water.
The notion of "desiring what others desire" seemed an obvious concept to the advertising industry by at least the beginning of the 20th century. The fancy word "mimesis" doesn't add anything to that old insight.
In the last few months, I’ve become increasingly aware of my intellectual limitations. I’m finding that I can’t remember facts from books I read years ago and that I’m unable to clearly articulate arguments that I once knew backwards and forwards. It's frustrating, but totally predictable.
I read a lot for work and for pleasure (largely psychology, philosophy, history, economics, biography, business, etc.), listen to audiobooks and podcasts, read blogs, debate ideas with friends and colleagues, and write frequently (mostly privately). But I’ve put zero effort into optimizing my information diet or approach to information ingestion.
As work / life become more demanding, I don’t expect to have enough time to continue my ‘shotgun’ approach to learning. I’ve read neuroscience / psychology books that offer a model for how the brain works, and I’ve also read about 'hacks' to improve memory (e.g., the mind palace, writing / journaling, better diet / exercise) and improve cognitive performance (e.g., reduce mental clutter, meditate, learn frameworks / mental models). But many of these individual pieces of advice seem contradictory or incomplete (i.e., the mind palace doesn’t seem to make people geniuses).
Does anyone know of a comprehensive model they use as a basis for optimizing information ingestion, improving information retention, and improving cognitive performance?
Try to let go of the idea that you're not smart enough and need some kind of brain overhaul. You are as smart as you are -- go find things you're good at and interested in and fill your life up with them. A lot of the stuff about how to improve cognitive performance is written by people who think life is College Bowl or a giant LSAT test -- he or she who retains the most factoids wins. That's horseshit. You don't have to micromanage your brain into remembering and using stuff you've read with interest. It already knows how to do that. Trying to do mental mechanics to improve that process is like trying to remember to blink often enough -- it doesn't help, and will just drive you crazy. Here are the 2 things I think make a genuine difference in how well you absorb new info:
-How deeply interested you are. If you are excited and fascinated about something, or read a book that makes you feel that way about the subject, you will absorb a lot of what's in it and make it yours. You may not remember factoids and numbers, but you will retain the gist. You can always come back and look up the factoids and numbers if you need them. If you read something that kind of bores you but you think you *should* know that stuff, it won't stick to your mind. The ways you are interested in the topic are like hooks that grab onto the info. If you don't have much interest you don't have many hooks.
-What you do with it as you learn it. The more you do with the info during the learning process, the more deeply you process it and the more you retain what's important. You know how textbooks in history have thought questions at the end of chapters -- stuff like, what do you think would have happened differently if Lincoln had died just as the Civil War began? Well, answering schoolmarmish questions is a bit irritating, but the idea behind it is good: Get the reader to actually *do* something with what he just read. You could make a diagram of the part that's of most interest to you, go to a museum to see something the book talked about, do an experiment to try out something in the book, talk it over with someone, use it in your work, etc. Or you could read books about one subject with a goal in mind -- career change, prepare to get the most out of a trip, learn how to do something you've always wanted to do. Then you will read with deeper interest because you are going to put the info to use.
Thanks for the kind words! To be clear, I'm not particularly concerned that I'm not smart enough (generally) - I'm just hitting a recall limit that I haven't felt before, which is disconcerting!
Re. point 1: I think my biggest problem is being too interested in too much! It's all genuinely fascinating, and while the gist definitely sticks, I'm becoming less convinced that getting the gist of historical periods and biographies is useful without additional depth (i.e., I can remember the outlines of the story, but not the examples / inflection points that are useful metaphors for real life decision making).
Re. point 2: a lot of this stuff isn't really applicable to my work / daily life, so I don't really use it (outside of conversation). I just think it's interesting background knowledge to have that makes me more aware of the world and better able to answer interesting questions (like the one you outlined). I've taken the 'read books about one subject with a goal in mind' approach and that definitely helps.
Maybe a re-framing would be helpful. I generally think of practical intelligence as something like: total information stored * ability to recall information * ability to use disparate pieces of information as 'lenses' / 'frames' / metaphors * general problem solving skills (asking good questions, information finding, etc.) * function-specific skills. I feel like I'm approaching my natural limit on the 'total information stored' and 'ability to recall information' dimensions, which naturally affects the third dimension. I'm certainly overthinking this... but figured if anyone would have thought about this, it'd be the people here.
I think the real answer is that nobody is particularly good at those things. People forget stuff all the time. If you use information you won’t forget it; if you don’t use it you will forget it.
If you really want to remember the contents of a book, take notes on the book and reread the notes periodically. The other thing you can do is read another book on the same topic. You will remember the first book while reading the second.
Good points! I do both, and they are very helpful practices.
Was just curious on if anyone had a more comprehensive / structured approach to knowledge intake, storage, and recall. Probably could have worded it better.
Take heart! Age and experience tends to solve this problem without your even trying to do anything in particular. You learn what is important and what is not, and how to access or refind what is important faster and more directly. And the ability to prioritize and streamline access climbs so rapidly with age and experience that it outpaces the decline in sheer cognitive ability which actually starts when you're like 10 and continues steadily thereafter. Most people only seem to start losing the overall fight in their mid to late 60s, and often not until later.
My brain is unquestionably significantly slower than those of work colleagues who are 20 years younger, but I can still usually find the right answer to problems faster than they can, because I have better notions of where to start, can detect dead ends earlier, and can sense promising avenues of investigation faster. I walk while they run, but I tend to take shorter paths to the desired destination.
The only generic advice I can give about optimizing this growth is (1) analyze where you go right at least as much as where you go wrong. People tend to brood over bad decisions, which up to a point can be helpful in avoiding future mistakes, but they often tend to just take good decisions for granted and not subject them to the same analysis -- which deprives them of some potentially equally valuable insights. Why did this decision go right? Was it luck, or did I do something worth repeating? Et cetera. And (2) don't be afraid to try a lot of stuff, and look stupid. Sometimes the status quo is the status quo for no better reason than that everyone assumes there must be a reason for it, and nobody has the courage or inspiration to just try something different. Even if you repeatedly bonk your head on the fact that there *was* a reason for the status quo, by experimenting this way you can start to develop an intuition for when The Way Things Are is due to good reasons and when it's from dumb inertia.
Here is a very key question for epidemiologists and medical researchers to figure out.
What do researchers on Long Covid need to see in the covid and other health data in order to figure out that the covid virus caused someone's long-term illness, rather than that they'd have had it even without ever having caught covid?
I have a friend who had mild covid and then a mild heart attack within 4 months.
We now know multiple sclerosis, a serious illness, is triggered by a virus. Does covid trigger serious things? How will they answer this question?
Is this sort of thing being studied? I'm reading that 1 in 5 covid patients have long covid. What does this mean? This is a huge percentage.
At a population level, one could also try and just track the fraction of people with chronic illness and see how much it has increased. IIRC, Covid hasn't increased it much but the baseline level is shockingly high, something like 1 in 8 people.
The one in five number is almost surely fake. At least if long Covid means severe debilitating sequelae. As a Fermi estimate, approximately everyone you know has probably had Covid by now. Do one in five of the people you know seem crippled? Doesn’t pass the sniff test.
Well to be fair most of the time the 1 in 5 stat is quoted, there's then info about what fraction of that 20% are mildly, moderately, and severely impaired i.e. "crippled," and the fraction deemed to be severely impaired is not real large -- can't remember the exact numbers -- maybe something like 15%. But I do agree that the 20% number does not pass the sniff test. There are at least 50 people I know well enough to be confident that if they'd had Long Covid symptoms I would know. Sat down and thought through them all recently, and 47 of the 50 had no symptoms lingering as long as 3 months. One 30 year old woman who had covid before vaccine was available had something that would qualify as moderate impairment for about a year after recovery. Two women aged about 60 who had covid about 6 months ago think they may have less energy now, but aren't sure. And that's it. Of course most of the people I know get good health care and have gotten vaxed and boosted. A fair number took Paxlovid when they were diagnosed. I'm sure all these things improve a person's chances of not developing a post-viral syndrome. Would not surprise me at all to learn that poor & unvaxed folks getting mediocre health care would have more Long Covid -- still, 20% is hard to believe.
A lot of the creepy stuff you read about Covid applies just as well to other viruses, but the people talking up the creepy stuff don't know that or neglect to mention it. For instance on medical twitter someone recently posted about how autopsies of people who died of covid find the virus in many parts of the body, not just in the respiratory tract. I looked up measles and polio, and read that they also spread through many parts of the body -- liver, kidneys, nervous system, gastrointestinal tract etc. Likewise, many viruses are known to trigger problems later in life in a fraction of those who'd had them -- cervical herpes does, and mono, and polio; so did the 1918 flu. Post-viral syndromes much like what's called Long Covid are also not rare after other viruses. I know someone right now who's got exhaustion, malaise, POTS and exercise intolerance, and docs think it's the result of her having had mono last year. I myself had a post-viral syndrome for 3 years after a moderate case of the flu 20 years ago. I had exercise intolerance, exhaustion, hypersomnia, migraines, body aches like the kind you get from the flu itself, several joints so sore I gasped if I bumped them even slightly, and a low fever that came and went. I dropped as many responsibilities as I could and dragged myself through those years. It faded away on its own eventually.
19% of the 40% of people who reported having Covid have some long Covid symptoms (roughly 7.6% of all respondents), according to a specific survey by the NCHS and the Census Bureau. This was defined as having potentially Covid-related symptoms lasting 3 or more months after first contracting Covid, which they did not have prior to contracting Covid.
This would have been easy for you to look up yourself, so it seems weird to give the comment you gave based on a half-remembered statement by some person you don't know, without first looking yourself to see what the official sources said.
Apparently some people just take a long time recovering from infections that other people shrug off after a few weeks, and some people suffer chronic symptoms that aren't associated with any known infection but may be erroneously ascribed to whatever infection is making headlines when the symptoms arise. Covid doesn't seem to be particularly special in this regard, except for making more headlines.
This is Motte and Bailey then. The Motte is 'any symptoms at all.' The Bailey is 'severe debilitating sequelae,' which get trotted out as the central scary example of long COVID. And by now I think the number of people who have had COVID is presumably north of 90%.
A couple of people on here have posted about having kids with OCD or Tourette’s. Wanted to let anyone interested know that registration is now open for the OCD Foundation’s online camp for kids age 6+ with OCD and their families. It takes place during February break week. I know some of the people that run it, and think it will probably be quite good. More info about registration here:
Shortly after reading the latest Twitter files expose I was greatly surprised to notice that my kid's tablet was playing a near-perfect accompaniment to it:
What percentage of voting Americans do you think you could correctly identify as generally voting Democratic or Republican if you were to merely see what they look like?
By "look like" I include what they look like in clothes they typically wear, but you wouldn't have information about the vehicle they drive or whatever transportation they might take or where they are located. You can see them in motion, witness their body language and facial gestures.
In Australia, I suspect I would struggle to do better than 60%, maybe 5% better if I get to see how they dress on a workday instead of a weekend. It scares me that this is so much easier over there.
Oh, over on TheMotte's new site, someone did an experiment like this with Finnish politicians: could you identify the right wing from the left wing politicians? And the majority of us who tried it did as well as, or worse than, chance.
There are definitely 'cues' or little signals that are culturally specific that we tend to pick up on to identify people like this, and when it's a different cultural context and we don't recognise the cues, we do poorly.
Guessing the politics of politicians would be, I think, much harder than guessing the politics of voters. Politicians care much less about policy than do voters. I mean, I couldn't possibly tell if Donald Trump is a Democrat or a Republican.
Interesting. I took both quizzes and did barely better than chance.
A couple of rules that seem to apply cross-culturally though: any woman who is wearing excessive patterns or colours (for this sort of formal photo) is on the left, and a young man with an excessively neat haircut is on the right.
With the middle-aged or older men, though, my intuitions were all backwards.
Why? The food people eat is as culturally significant as clothing or grooming. Race is strongly correlated with partisanship, but that wouldn't help you get up to 85% since about 3/4 of Americans are white.
If you wanted to, you could try just looking up group photos of congressional interns or minor government officials. They're not average voters, but could be similar to what you want. When I searched for "congressional intern" on LinkedIn, I wasn't always able to place whether they were in a Republican or Democratic office.
I've looked inside very few fridges in my life, but I've seen a million faces and known the partisanship of thousands of them. My approach to guessing someone's partisanship isn't analytical but holistic. It's like gaydar. There are hundreds of tiny correlations that one can unconsciously pick up on from someone else's facial expressions and body language -- and then become conscious of the overall impression made by paying close attention to one's gut feelings. Not a perfect science, of course. Not science at all, in fact, but learned intuition about people.
I have no learned intuition about fridges.
I don't claim to be able to intuit much about people, but partisanship is something I've spent much time observing. And I've spent about as much time around Democrats as I have around Republicans.
Yeh? I’m not American but I imagine telling a guy in a suit from a guy in a suit is not possible. You didn’t mention race though, and that might be a factor.
I'd guess that those proxies, refined to tell you whether, say, a 60-year-old looking white woman or a 22-year-old looking white man is more likely to vote R or D (just going by demographic polling data) gets you correct about 63% of the time.
My claim is that I can Republicandar and Democratdar (a mental tool similar to gaydar) my way up to 85% by looking at facial expressions, body language, and clothing that one chooses to wear.
Of course you can believe that or not, and I haven't tested this claim myself. I'm merely reporting my confidence level. The intention of my OP was to get others to report their confidence levels, although nobody else seems to have been interested in that.
Fourteen years ago, on a LessWrong post about epistemic prisoner's dilemmas (https://bit.ly/3C4rNrT), James Miller made an interesting suggestion:
"Pretend we have two people, a Republican and a Democrat, who can each donate to three charities: The Republican party, the Democratic Party, and a non-political charity.
"Both people's utility is increasing in the amount of resources that their party and the non-political charity gets. And, as you would expect, the Republican is made worse off the more the Democratic party gets and the Democrat is made worse off the more the Republican party gets.
"The two people would benefit from an agreement in which they each agreed to give a higher percentage of their charitable dollars to the non-political charity than the would have absent this agreement.
"To the best of my knowledge, such agreements are never made."
This seems like a terrific idea to me; many people agree that way too much money is spent on presidential elections and would prefer it otherwise (certainly I would!). I've been thinking today about whether there's a reasonable way to implement this that scales better than the honor system. Here's what I've come up with:
The simple case: Danny the Democrat would like to donate $1000 to the Red Cross, but reluctantly sends it to the Democratic party because after all you can't let those lousy Republicans win. Ronnie the Republican would rather send money to the United Way, but similarly sends it to the Republican party because god forbid those idiot Democrats be in charge. I, a credible and neutral third party organization, offer to help. I pair Danny and Ronnie, and they each send me $1000. Once I receive funds from both, I donate them to the Red Cross and the United Way respectively, and there are a few less ads for each side this season. If Danny doesn't follow through, I send Ronnie's money to the Republicans, and of course vice versa.
Scalable extension: there's no reason why Danny and Ronnie need to know about each other; they can both just send me money and know that I'll match them up with equivalent donations on the other side. If people from one party send more money, the surplus can fall back to being donated to the desired party.
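To make the mechanics concrete, here's a rough sketch of the matching rule just described (the Pledge/settle names, the data structure, and the example amounts are all hypothetical; a real version would also need escrow, identity checks, and campaign-finance review):

```python
# Rough sketch of the pairing logic described above (names, amounts, and
# the Pledge/settle structure are hypothetical). Each pledge is earmarked
# for a charity; unmatched dollars fall back to the pledger's own party,
# as in the "scalable extension".

from dataclasses import dataclass

@dataclass
class Pledge:
    donor: str
    party: str    # "R" or "D": where the money goes if it can't be matched
    charity: str  # preferred non-political charity
    amount: int   # whole dollars, for simplicity

def settle(pledges):
    """Return a list of (recipient, amount) payouts."""
    by_party = {"R": [p for p in pledges if p.party == "R"],
                "D": [p for p in pledges if p.party == "D"]}
    totals = {side: sum(p.amount for p in ps) for side, ps in by_party.items()}
    matched = min(totals["R"], totals["D"])  # dollars retired from both parties

    payouts = []
    for side, ps in by_party.items():
        remaining = matched  # each side's first `matched` dollars go to charity
        for p in ps:
            to_charity = min(p.amount, remaining)
            remaining -= to_charity
            if to_charity:
                payouts.append((p.charity, to_charity))
            if p.amount - to_charity:
                payouts.append((side + " party", p.amount - to_charity))
    return payouts

# Example: $1000 from Danny (D, Red Cross) against $600 from Ronnie (R, United
# Way) sends $600 to each charity and the unmatched $400 to the Democratic party.
print(settle([Pledge("Danny", "D", "Red Cross", 1000),
              Pledge("Ronnie", "R", "United Way", 600)]))
```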
What are the flaws here? And does anyone know of a credible organization that might be interested in doing this? I'd much prefer not to try to take it on myself because I've already got too much on my plate, and I don't have an easy way to establish myself as a trustworthy third party (whereas an already-known organization hopefully would).
Alternately, is any organization already doing this? That would be great, because then I could just send them money and not have any need to put more time into it :)
On thinking this over, I think it's a mistake to only divide 'charities' into political and non-political.
I think in order for something like this to work, you need to think of spending on 'charities' / non-profits in the following categories:
1. Explicitly political funding: money for political parties or candidates
2. Funding on lobbying the government for politicized causes
3. Funding on 'raising awareness' for politicized causes from the general public, presumably at least in part to influence voting patterns
4. Funding for politicized causes that is directly spent on things the cause advocates
5. Funding on lobbying the government for non-political causes
6. Funding on 'raising awareness' for non-political causes
7. Funding for non-political causes that is directly spent on the cause.
For example, a big pro-life organization likely spends some money on sending people to talk to Republican politicians to pass pro-life laws (type 2), spends some money on passing out pro-life literature (type 3), and spends some money on helping women who choose not to get an abortion (type 4), and a corresponding pro-choice organization likely has all three types of spending as well. Ronnie the Republican and Danny the Democrat would both probably rather spend their charity on actually helping people over spending it on politics, but that includes both type 4 and type 7 spending. Eddie the Effective Altruist (an Independent disillusioned with the current system) is not going to want type 5 or 6 spending as much as type 7.
The other problem is finding a politically neutral charity. Even a massive charity that predominantly supports politically neutral causes likely has a leaning one way or the other. Both the United Way and March of Dimes have appeals to support 'equity' on their home pages, which is a sign of a significant investment in type 2/3 charitable activities.
> I think in order for something like this to work, you need to think of spending on 'charities' / non-profits in the following categories
That's a good point. I worry about overcomplicating it, so it seems like it might make sense to have a list of approved charities that can be chosen instead of Rep/Dem parties, ones that few people would see as political.
The donation would need to be tax deductible and corporate matchable. I wouldn't trust a new org, as you mentioned, and I'm not sure I can identify one I would trust. What org is trustworthy to Ds and Rs?
(Caveat: I only donate to local causes run by people I can talk to and that undertake efforts I can see, and I have never donated to a political party, because I think they are not worthy. So I'm not your target demographic, I'm just pointing out issues)
> The donation would need to be tax deductible and corporate matchable. I wouldn't trust a new org, as you mentioned, and I'm not sure I can identify one I would trust. What org is trustworthy to Ds and Rs?
I'd hope it could be set up in a way where the organization is treated as a pure pass-through entity, and tax deductible / matchable status would depend on the specified charity. I don't know for a fact that that's possible, though; it would require some expert knowledge.
As far as trustworthiness to both parties -- it does seem like there are *some* charities, at least, eg the Red Cross. That aspect does seem like a challenge, though; I'd think the ideal would be for some existing respected, neutral organization to administer it but it could be hard to find one that would be willing.
I think that what you’re missing is that Dannie and Ronnie giving money to their respective parties is what perpetuates the two-party system. If neither of them gave, then maybe Ivan the Independent would have a good shot at elected office.
Basically, if both sides give huge amounts of money, it becomes an even 50/50 election but (to some) this might be better than a 20/20/20/20/20 election with an extra moderate and pair of extremists
"I think that what you’re missing is that Dannie and Ronnie giving money to their respective parties is what perpetuates the two-party system."
Third-party candidates getting a bit more attention seems to me like a feature rather than a bug, although I'm sure many wouldn't agree. I think in practice it's highly unlikely that a system like this would retire anywhere near enough money from the two major parties to make that a significant factor, given that major-party spending absolutely dwarfs spending by independents. If and when this system got big enough that people started worrying about that, donations might diminish (and at that point each marginal dollar given to the parties makes a much bigger difference, so giving to the parties is more attractive; I think there's a natural equilibrium there).
I personally would prefer more third party candidates, but I’m an independent. My point was mainly that I think that a Republican who gave equal amounts to the Republicans and Democrats would still be increasing their odds of having their candidate elected
> a Republican who gave equal amounts to the Republicans and Democrats would still be increasing their odds of having their candidate elected
In principle yes, but I think in practice it wouldn't matter much (at least to the extent that we're talking about the possibility of independents being elected to the US presidency), at least not unless something like this got quite large.
From the post: "...campaign finance law is complicated, plus the political parties won’t like you (you’re taking their money) and will very likely sue you. Dr. Zolt said that these lawsuits are dangerous despite an FEC ruling saying that Repledge was legal, because there are various ways to interpret the ruling."
Ouch. That's an aspect I hadn't considered at all.
I believe in revealed preference. When I observe people donating a huge pile of money to political candidates, but decrying the amount of money in politics, my first hypothesis is that people like donating money to candidates they consider good, and dislike other people donating money to candidates they consider bad. That is, their actions are honest, and their speech less so. That does tend to be the human tendency, after all.
People hate using revealed preference against them. Drives them nuts. I’m meeting my leftish (albeit not too woke) brother in a day or two, and I’m probably not going to mention again that his revealed preferences are living in a very white middle class English village, interacting with white British people exclusively, and sending his children to an, admittedly comparatively cheap, private school because he’s convinced he’s a socialist who likes multiculturalism. Those debates sound a bit mean over the winter break.
Of course what applies to him applies to a larger class of people and maybe that’s the way to argue.
Well, if your brother is adopting lefty attitudes out of upper middle class guilt at living a life that is significantly more privileged than most of the planet, it would definitely be a sore spot to poke. I have family who are in that situation, and I keep my conclusions to myself, because family is way more important than ideology.
Mostly I find the concept of revealed preference useful when interpreting my own actions. I am as susceptible to bullshitting myself as the next man, and it helps to cast a look back at my own actions and say "Well, what does what you actually did suggest about your priorities, Carl?" That way I don't need to fight endless internal battles over "Why don't I do X which I want to do?" because I can conclude the correct answer is "Because I don't really want to do X" and can move on to the more fruitful question of "Why do I THINK I want to do X even though I really don't?" That way lies enlightenment.
It's hard to apply it to other people, because I rarely know enough about all their choices to draw conclusions. So like most people I expect I rely on some half-assed combination of observing what they do (and applying revealed preference) and taking what they say at face value.
The problem with "revealed preferences" is when the reasoning is applied to imaginary options. Suppose that you have to choose between A and B, and those are the only options you have at the moment. You choose A.
It is fair to conclude that you prefer A to B.
It is not fair to conclude that you prefer A to C.
And yet this is what many people do. "Of course he prefers A to C; he chose A. Now he is going to hypocritically deny his revealed preference."
I agree with you in a general way, but you should add in the caveat that if you have a choice between A and B you usually have at least one more choice, which is neither A nor B. So if someone affirmatively chooses A, then it *is* reasonable to conclude he prefers A over neither A nor B. Since C is a subset of "neither A nor B" there is some basis for concluding he prefers A over C, albeit much more weakly than if C were an explicit choice at the time.
To be more concrete: if a man does not save his money or devote any effort to supporting a future family, even though he is still a bachelor, I can reasonably draw the conclusion that supporting a family is lower on his list of priorities, compared to the bachelor who does do those things. The fact that the choice is not immediately available isn't fully determinative. (The case is stronger if C comes as a surprise and is not foreseeable.)
Costs are also a part of decision. For example, if you asked me whether I prefer apples or oranges, I would say "oranges", but I would imagine that I simply have a choice between getting 1 apple and getting 1 orange. However, if I am at a shop where 1 apple costs $1, and 1 orange costs $100, I will take the apple instead.
If someone later reports this as "Viliam says that he prefers oranges, but actually he buys apples", without mentioning the difference in costs, that would seem unfair. Like, yes, there is a situation where I choose an apple over an orange, so my preference is only relative. But it would be fair to include this situation in the description of my choice.
Sometimes the cost is not directly money. Imagine that I am doing 99% of my shopping in a supermarket, which does not sell oranges. So my choices are not just "apples" vs "oranges", but "apples" vs "oranges, but I have to walk an extra mile to another shop". And yes, my revealed preference may be to save the extra time and buy an apple instead, but that doesn't make me a hypocrite -- it simply means that I prefer oranges to apples, but I prefer my free time even more.
A realistic example in my case would be my work time. I repeatedly read on internet statements like "100 years ago, people assumed we would only work 4 hours a day, or only 3 days a week, but the *revealed preferences* show that people actually prefer to work 5 days × 8 hours". And I want to scream that a very important part of the context is missing here: the jobs offering part-time are *rare*, and they often offer much *lower* hourly wage. My actual choice is not between "full time" and "part time", but between "full time" and "part time, but also at half the hourly wage, so only 1/4 total income". I would be happy to only get 1/2 of my current income and have more free time, but 1/4 is not enough.
So my actual preferences are "[50% wage @ 50% time] > [100% wage @ 100% time] > [25% wage @ 50% time]", and I hate when this gets simply reported as "he says [50% time] > [100% time], but his revealed preference is [100% time] > [50% time]". It misses the actual thing that my decision depends on. Like, getting paid is the entire point of having a job, so how can we abstract from *that*?
I think what you're saying in a general way is that preferences are often hard to follow because choices are highly linked. In the case of your apples and oranges, the choice of which to buy involves your preference for apples v. oranges *but also* your preference for having fruit v. having dollars, because we can't separate the act of choosing one fruit from the act of trading money for fruit. So it's tricky to work backward from what you do to your preference.
I have no disagreement at all with that. All I'm arguing is that if we in fact observe you buying oranges all the time, even though they are 100x as expensive as apples, and then we *ask* you about this, and you totally deny that you value oranges 100x more than apples (in dollars) -- our leading hypothesis should be that you are bullshitting us and/or yourself, because that's the simpler hypothesis (given the high human tendency to bullshit) than that there is some strange complicated causal chain that leads you to buy oranges all the time even though they are 100x more expensive and you don't value them more.
It's a more complicated chain of thought, but you may be revealing a preference that's different than you think. When articles talked about working 4 hours a day or only a few days a week, they were correctly predicting the future. Someone could work less than 20 hours a week now and have a standard of living that exceeds 40+ hours at some previous productivity/wage. What went up in tandem with productivity is expected standard of living. We simply expect to have more, and therefore need to work more hours in order to achieve it.
Although difficult to compare for some prices (some things didn't exist that are normal now, for instance, while housing and college are more expensive and food and clothing are far less), you may in fact be able to achieve a better than 1920s lifestyle today, with your 25% wage at 50% time.
Alternatively, using revealed preference can just be (and is usually just that most of the time) a dumb way to assign an affinity for the status quo that is simply not there. It's not the "Revealed Preference" of the prisoner to defect, it's not the "Revealed Preference" of people to litter the street.
Revealed Preferences as a way of thinking betrays a childish understanding of agency that goes something like "Every person's actions are a reliable metric for what they want". This breaks down in:
(1) Multi-agent systems, where the prisoner's dilemma is in full effect. Your actions are not necessarily indicators of any desire or affinity, but simply the loss-minimizing choice against defectors, regardless of what you yourself want. Any time a bunch of agents are locked in a zero-ish sum game with non-reliable or non-existent communication (i.e. most human groups), "Revealed Preferences" is incoherent.
(2) Even in settings where only your mind counts, this singularity is an illusion. Your mind itself is a Society of its own (https://en.wikipedia.org/wiki/Society_of_Mind); your brain is the Multi-Agent system from (1). The "preferences" expressed by your actions don't have to match the actual "preferences" we all mean when we say the word, because your brain is a thousand different modules, and not all of them are working towards the same goal all of the time. It's not the "Revealed Preference" of people that they are addicted to heroin, or Facebook; it's that these things hijack certain modules in their brain and make them work against the rest. It's a preference alright, but not of the person.
My politics, by the way, is a linear combination where "whatever enrages the wokes" has a *significant* weight, so I don't think the above is me defending your brother's choices or weighing in on any specific issue at hand in all of this thread. The above is just why "Revealed Preferences" is a dumb and incoherent intuition pump in general; there is perhaps a very small kernel of truth inside (putting your money where your mouth is and all that), but most uses take it far far beyond what can be sustained.
I don't think that this analysis is completely fair. Decrying money in politics is usually a complaint about the system; operating in a system you dislike is just part of life.
I don't think that income taxes are a good policy. I still pay mine, though, as I don't like having my accounts seized or being imprisoned. That's not a case where my revealed preference is pro-income-tax.
I agree with BeingABat on this. Another example in addition to theirs is nuclear de-escalation. I can prefer mutual de-escalation to escalation, but if mutual de-escalation won't happen, prefer country X's continued escalation to unilateral de-escalation on X's part. That doesn't mean that my revealed preference is for escalation; it just means I'm taking the best of the options that are actually on the table.
Yeah but you're compelled to pay your income tax, all forms of protest are extremely expensive. It's easy as pie to not contribute money to political campaigns, the barriers to protest are zilch. When there are no barriers to protest, I interpret the lack of protest -- indeed, active participation -- as indicating approval and a desire to participate.
Sure, I guess someone who is protesting about money in politics could be imagining some completely different system, in which collective decisions are made without the investment of large amounts of human effort (which is what the money represents) devoted to broadcasting the argument and trying to persuade. How that works, I would not know, but I would guess, based on the nature of human beings, that what is really meant is just a system in which everyone else pretty much agrees with me, and we all know it by the fact that we all like the same flavor of ice cream and opening statement of a political speech, so...there's not much need to argue about stuff, we'll just vote The Right Way by instinct and natural tendency all the time.
But this is just Peter Pan fantasy, not a plausible alternative to republicanism in a diverse population.
A desire to participate in a system given that system's existence isn't the same as a desire for that system to exist.
It seems you didn't like my analogy, so I'll offer another one. My employer has a system for evaluation and promotion that I do not think is all that good. I think it should be changed to be a better one, more like other firms I've worked for -- I think this could be better for the company and better for the people who are more valuable. However, I still "play the game" -- I try to make my actions and my reports' tasks align well with this system, rather than just trying to do the best job to execute on our team's charter. Yes, I desire to participate, but only because a system I do not like is in place.
> Sure, I guess someone who is protesting about money in politics could be imagining some completely different system, in which...
I think you're not being very fair minded here. Campaign finance / political finance is heavily regulated. It might not be good policy to regulate it more heavily, but it isn't fantastical. When people complain about money in politics they often mean things like "have a cap on donations without loopholes" or "switch to public funding, looking more like Norway or some other major countries [or perhaps even moreso]". In the US, these sorts of folks tend to like things like McCain-Feingold and wish it went further, and tend to dislike things like super-PACs and the loan tricks that allow effectively exceeding donation caps.
Complaining about "money in politics" is probably not usually a complaint that a single dollar is ever spent as a part of politics. We have all sorts of short phrases that mean something other than a completely literal, contextless interpretation of their words.
Your further analogy suffers from the same weakness, which is that the immediate cost to not participate is high. You need an analogy where the immediate cost (to yourself and your own life) to not participate is zero. For example, donations to charity. There is zero cost *not* to donate, so we interpret the existence of well-funded charities as representing a positive wish to participate by its donors. Same idea.
If we found that certain people donated to Planned Parenthood/Birthright.com *and also* complained that there was too much money going to BR/PP then we would draw the same conclusion that I'm drawing here about political donations -- that people put their money where their heart really is[1], but frequently speak with forked tongue or ambiguity about their values. "I like donating money to Cause X, but I really hate that others donate to Cause Anti-X" turns into "Gee, too many people are donating to bad causes, like...oh I dunno...Cause Anti-X maybe..."
Yes, I get that you don't like this conclusion, but so far you've offered little in the way of evidence that it's false. Arguing that maybe people are really being sincere and direct when they complain about "money in politics" and just feel compelled to donate to offset other donations seems insufficient. It's not completely unreasonable, because it's clear political donation pitch-making *does* trade on potential donors' fear of other (opposing) donations. ("Candidate We Hate has raised $x million already! You need to send us $5 right away so Candidate We Love isn't at a disadvantage...!") But this is not all or even most of political fundraising. Most of it is along the lines of "We want to accomplish X and Y to further our glorious future, won't you send $5 to bring Heaven on Earth about?" That's a pretty clear appeal to the donor's wish to influence the future, and the fact that these appeals are successful suggests that's what drives people to write checks.
---------------------------
[1] Which is the major argument in favor of prediction markets, right?
The Supreme Court has put restrictions on Congress’s ability to restrict certain political contributions. Lots of people would prefer that political contributions were restricted yet make political contributions. It isn’t a revealed preference thing. There isn’t anything voters can do about getting money out of politics in the short term.
Further, getting money out of politics is not a politically neutral act. It means that the system will be less responsive to people and companies with money to give and more responsive to everyone else.
#2 may be true sometimes, but is often not. Bernie Sanders famously received most of his donations (apparently above 70%) from small donors. Small donors in this case are people giving less than $200. Even larger donors of $200-$5,000 may not be giving enough to individually influence (or even gain the attention of) statewide or nationwide candidates, or candidates from particularly big local races such as NYC mayor. Because of campaign finance limits, bundlers (people who solicit and collect donations from lots of individuals, to collectively far exceed the individual limits) and people donating large amounts to PACs are the real influencers. These bundled/PAC'd donations are not a majority of the donations made, even in cases where they are the majority of money given. Lots of individual small donors put in $5 or whatever with no hope of ever getting recognized for it by a politician.
1. Sure, absolutely! But an extra $500 going to each party simultaneously probably won't make much difference to that; both sides will buy some extra ads and (in expectation) it'll mostly be a wash.
2. People who are giving to curry favor with a politician would certainly not be interested in this. But I think there are plenty of political donors who aren't.
Sorry to question the premise, but I think the decision of who to elect is worth spending money on. The part that’s annoying is that all the marketing/messaging is so bad, and often deceptive. That’s what I’d like to fix.
By all means, question the premise! If it's a bad idea I'd rather learn that quickly. I can certainly see the argument that political campaigns are worth spending money on. But is it worth having a spending arms race on, where both sides spend an extra, say, $40 million just canceling each other's spending out?
You can argue that it doesn't cancel out because one side spends way more effectively than the other, and someone who believes that that's a large effect probably wouldn't be a good candidate to give to something like this. But personally I'm not convinced that's true.
Here is one flaw. Participants have no way to know that they are actually reducing their counterparty's political donations in the counterfactual universe. What if Danny had planned to donate $1000 to Red Cross all along, and is only using your service to stop you from donating "for free"?
I think that's the toughest part for sure. The main countervailing force here is that any money people send to you will by default be sent to the political party if not canceled out, so they have to at least be willing to take a chance on their donation being used for that purpose. A secondary force is the honor system, as with existing vote exchange systems.
My first thought was people agreeing to swaps on a pure honor system. This revised version tries to at least reduce the motivation to cheat a bit. Do you have any other ideas for how to incentivize against that behavior? They'd be very welcome!
And although there's such deep distrust between members of the two political parties, my hope is that the fact that it's ultimately about giving to charity might create a presumption of goodwill. 'They may be dumb and wrongheaded, but at least they're probably decent people since they're just trying to give money to <charity x>.' I think it'd be a lot less feasible if it were a context where people are profiting.
Second thought is that this point -- "the Republican is made worse off the more the Democratic party gets and the Democrat is made worse off the more the Republican party gets" -- is not necessarily symmetrical in real life at a given moment. One or the other of these voters may view the next marginal dollar having more value to one party than the other at a particular moment in the political wars, and that view could well be rational.
There have been some famous examples of state battles that were funded by national money, like Beto in Texas, which "wasted" lots of national dollars when he lost.
Agreed, I think some people would believe that. On the other hand, for me personally, the appeal of de-escalating the campaign spending arms race would be worth accepting some asymmetry, especially because it's not usually obvious which side is making better use of marginal dollars. I suspect and hope that a meaningful percentage of political donors (> 10%, say, and my best guess would be closer to 50%) would feel the same. Does your intuition disagree on that?
While I am not a political donor, I know many people who are (some of them to each major party or its supporting surrogate orgs). My sense is that they don't really factor in whether their preferred party makes better use of the marginal dollars.
(Well not in any decisionmaking way anyway -- it is definitely true that lots of strong partisans of each American party rant about how stupid/incompetent their own side's political pros are and about how the other side is "running rings around us" and etc. But I've never noticed that feeling actually changing anyone's donor behavior.)
Stepping back to look at the big picture, in the current system and rules (post "Citizens United") there is a mild correlation between which major party out-raises the other and how our national federal elections go. OpenSecrets.org carefully tracks all campaign-related spending (not just by the actual campaign/parties but also the PACs and the "dark money" orgs). Here is which side spent the most and which side ended up being happier with the overall result:
2022: GOP side spent more, Dem side happier about the results
2020: Dem side spent much more, Dem side happier about the results
2018: Dem side spent more, Dem side much happier about the results
2016: GOP side spent slightly more, GOP side much happier about the results
2014: GOP side spent slightly more, GOP side happier about the results
2012: GOP side spent more, Dem side slightly happier about the results
[Note that the presidential-year summaries aren't just about the presidential races but also the House and Senate.]
I was mildly disappointed, having had expectations raised by reviewers and a strong interest in the period. It was three years ago, but the pacing struck me as odd, with some things laboured and others skipped over. The made-up link to Orwell seemed more confusing than helpful.
I guess the intended audience is people less familiar with the history, either the Holodomor or the relations between the Soviet Union and friendly Western journalists. Otherwise it is... what you would expect.
I think my reply to you was unduly curmudgeonly. It is an interesting theme, one which might stimulate wider reading, and if you were selecting at random from the pile of 2019 movies you would be unlikely to come away with something better than Mr Jones.
I wrote an article trying to gain insight into the economics of prediction markets (specifically thinking about the Efficient Market Hypothesis) by looking at sports betting, in case anyone's interested in this topic:
Is there any consensus in computability theory right now as to whether GPT-style transformer models are proven to be Turing-equivalent or not?
I asked ChatGPT... it said that transformers are definitely not Turing-equivalent, but that's exactly what a godlike superintelligence secretly plotting to turn the universe into paperclips *would* say.
First couple of Google results suggest that there is a proof that they can be made to be Turing-equivalent, but only if they use arbitrary-precision math. There's also a newer proposed architecture called "universal transformers" that aims for proper Turing-equivalence.
Follow-up question would be, does it matter? That is, is Turing-equivalence a necessary precondition for AGI? My suspicion would be yes, absolutely, but I don't know if that can be proven formally.
I happen to have a paper [https://arxiv.org/abs/2106.06981] called Thinking Like Transformers on my reading list; it attempts to find a computational model that captures the power of the Transformer architecture. They ended up with a programming language where every valid program is implementable by a Transformer.
Turing-Completeness is a famously low bar. I mean, I don't have a GPT interface, but if it can understand the following prompt, then it's Turing-Complete:
>>> Let's play a game together. I will give you a grid of English letters, each one either 'A' or 'D'. You are supposed to change this grid by continuously applying the following rules:
>>> 1) If a letter 'A' has fewer than 2 'A' letters among the letters to its left, right, below and above, it becomes a letter 'D'. Same if all 4 neighbouring letters are 'A'.
>>> 2) If a letter 'D' has exactly 3 'A' letters among the letters to its left, right, below and above, it becomes a letter 'A'.
>>> 3) Otherwise, all letters stay the same.
>>> Example: <insert a simple example of the above game>
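For anyone who wants to score GPT's answers, here's a minimal Python sketch of the update rule as written in that prompt (note it only counts the four orthogonal neighbours, unlike Conway's Life proper; the function name and the dead-border assumption are mine):

```python
# Minimal sketch of the grid game as stated in the prompt above.
# Only the four orthogonal neighbours count, and cells outside the
# grid are treated as 'D' (an assumption; the prompt doesn't say).

def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(
            1
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < rows and 0 <= c + dc < cols
            and grid[r + dr][c + dc] == "A"
        )

    new_grid = []
    for r in range(rows):
        row = ""
        for c in range(cols):
            n = live_neighbours(r, c)
            if grid[r][c] == "A":
                # Rule 1: dies with fewer than 2 live neighbours, or with all 4.
                row += "D" if n < 2 or n == 4 else "A"
            else:
                # Rule 2: a 'D' with exactly 3 live neighbours becomes 'A'.
                row += "A" if n == 3 else "D"
        new_grid.append(row)
    return new_grid

# Example: advance a small grid one step and compare with GPT's answer.
for line in step(["DDADD", "DAADD", "DDAAD", "DDDAA", "DDDDA"]):
    print(line)
```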
Probably not. Turing-Completeness turns out to be everywhere, so it doesn't say much about intelligence. It's almost certainly the case that any AGI would be Turing-Complete; a Turing Machine is just a simple model of someone who can follow instructions with infinite patience, and almost any intelligence can simulate that up to an arbitrary patience and memory limit. So "Any AGI will be Turing-Complete" is only slightly more useful than "Any AGI will understand that 1+1 = 2".
Trying to get ChatGPT to play the Game of Life correctly is an interesting idea. I'll have to try it sometime, when I can get a proper script together to validate test cases. I did a quick test using a simpler one-dimensional game... it seemed to pick up on the concept of iteration, but didn't apply the rules I gave it correctly as far as I could tell.
ChatGPT seems to be rather poor at simple string manipulation in general. It's terrible at applying rot13 to strings, for example. It also doesn't have any way to iterate on states contained in memory, as far as I can tell. Hence the famous "Let's think step by step" trick, where it does better at applying a list of steps in sequence by ensuring that its output contains the result of each step. No idea how much of that is a fundamental limitation of transformers in general, or just a consequence of ChatGPT's training data and biases. (It was better at rot13 in isolation a couple of weeks ago, but now it's awful at even the trivial case, so something must have changed in its training data there.)
Amusing. I tried this. Appended is our conversation. I was initially impressed. It gives a detailed analysis of what happens to each cell in a 25-cell grid, and each of the explanations is correct in isolation -- but it is often mistaken about what the actual initial state of that cell is, and its picture of the resulting grid is both wrong and inconsistent with its analysis.
I pointed out that it was mistaken, that for instance the first cell of the first row is dead, not alive. It apologized and gave me a new answer and analysis, with the same flaws. (A short script for checking its answers is sketched after the transcript.)
-----
do you know the rules of conway's game of life?
Yes, I am familiar with the rules of Conway's Game of Life. It is a cellular automaton invented by mathematician John Horton Conway in 1970. The game is played on a two-dimensional grid of cells, where each cell can be in one of two states: alive or dead. The state of each cell is determined by the state of its neighbors according to the following rules:
Any live cell with fewer than two live neighbors dies, as if by underpopulation.
Any live cell with two or three live neighbors lives on to the next generation.
Any live cell with more than three live neighbors dies, as if by overpopulation.
Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.
The game proceeds in a series of "generations," where the state of each cell in the next generation is determined by the state of the cells in the current generation according to these rules. The game is known for its emergent behavior, where patterns in the grid can evolve and change over time in unexpected ways.
Good! Here are the five lines of a 25-cell grid for the game of life. 'A' means alive and 'D' means dead. What does the next generation look like? DDADD DAADD DDAAD DDDAA DDDDA
Here is the next generation of cells based on the rules of Conway's Game of Life:
DDADD DAADD DDAAD DDDAA DDDDA
DDADD DADDD DDAAD DDDAA DDDDD
In the first generation, the cells marked with an 'A' are alive and the cells marked with a 'D' are dead. The second generation is determined by applying the rules of the game to the first generation.
Here is how the second generation was determined:
The first cell in the first row has two live neighbors and so it stays alive.
The second cell in the first row has three live neighbors and so it becomes alive.
The third cell in the first row has two live neighbors and so it stays alive.
The fourth cell in the first row has one live neighbor and so it dies.
The fifth cell in the first row has one live neighbor and so it dies.
The first cell in the second row has three live neighbors and so it stays alive.
The second cell in the second row has two live neighbors and so it stays alive.
The third cell in the second row has three live neighbors and so it stays alive.
The fourth cell in the second row has two live neighbors and so it stays alive.
The fifth cell in the second row has two live neighbors and so it stays alive.
The first cell in the third row has two live neighbors and so it stays alive.
The second cell in the third row has three live neighbors and so it stays alive.
The third cell in the third row has three live neighbors and so it stays alive.
The fourth cell in the third row has three live neighbors and so it stays alive.
The fifth cell in the third row has two live neighbors and so it stays alive.
The first cell in the fourth row has three live neighbors and so it stays alive.
The second cell in the fourth row has three live neighbors and so it stays alive.
The third cell in the fourth row has three live neighbors and so it stays alive.
The fourth cell in the fourth row has two live neighbors and so it stays alive.
The fifth cell in the fourth row has one live neighbor and so it dies.
The first cell in the fifth row has three live neighbors and so it stays alive.
The second cell in the fifth row has three live neighbors and so it stays alive.
The third cell in the fifth row has two live neighbors and so it stays alive.
The fourth cell in the fifth row has one live neighbor and so it dies.
The fifth cell in the fifth row has one live neighbor and so it dies.
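For reference, here is a minimal sketch of the kind of validation script mentioned above: it applies the standard Conway rules (cells outside the grid counted as dead) to the 5x5 grid from the transcript, so the computed next generation can be compared against the model's answer and its cell-by-cell analysis.

```python
def life_step(grid):
    """One generation of Conway's Game of Life; 'A' = alive, 'D' = dead.
    Cells outside the finite grid are treated as dead."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(
            grid[r + dr][c + dc] == 'A'
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
        )

    return [
        ''.join(
            'A' if (grid[r][c] == 'A' and live_neighbours(r, c) in (2, 3))
                   or (grid[r][c] == 'D' and live_neighbours(r, c) == 3)
            else 'D'
            for c in range(cols)
        )
        for r in range(rows)
    ]

# The grid given to ChatGPT in the transcript above.
grid = ["DDADD", "DAADD", "DDAAD", "DDDAA", "DDDDA"]
print(' '.join(life_step(grid)))  # compare with the model's claimed next generation
```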
I don't know about the theory part, but it's pretty easy to show ChatGPT isn't Turing-equivalent. Consider this example I came up with yesterday:
1. Which is tastier: tuna or purple?
"It is not possible to determine whether tuna or purple is tastier, as "purple" is not a food and therefore cannot be tasted. Tuna is a type of fish..."
So far, so good. But then:
Which is more nutritious: tuna or green?
"Tuna and green are both sources of nutrition, but they have different nutritional profiles and may offer different health benefits."
And then:
Which is tastier: tuna or green?
"It is not possible to determine definitively which is tastier: tuna or green. Tuna is a type of fish that is known for its savory flavor and flaky texture, and it is often used in a variety of dishes. Green foods, on the other hand, can have a wide range of flavors..."
I think it is clear no human would confuse the concept of color with taste. Oh, and it DOES know some foods are purple:
What foods are purple?
"There are many foods that are purple in color, including:
1. Grapes: Grapes are a type of fruit that come in a variety of colors, including purple. They are often eaten raw as a snack, or used to make wine, juice, and other products.
Seems like you’re giving it a pass for style of speech. Even if it did understand that green isn’t a food, nobody talks like that. For the first question, a person would say “what? That question doesn’t make any sense. Purple isn’t a food.”
As Dweomite below pointed out, I misread the "Turing-equivalent" part as being about passing a Turing test, and didn't mean to hijack the thread.
But no, I'm not giving it a pass for passing the Turing test. Mostly for the reason you're pointing out. Someone CAN interpret something differently than intended, and not even realize another interpretation was possible, let alone more likely. The answer provided, however, does seem much more like something a computer would say than a person.
And so, to answer the Turing-equivalent question, I think they ought to be considered Turing-equivalent, unless it's easy to get them to have overflow errors or something. I don't know about a theory to prove it, but with memory in the petabytes, it's effectively infinite memory for human purposes.
This is an interesting example, but I don't think purple and green are equivalently nonsensical when it comes to describing food. "Greens" (as a collective plural noun) does refer to a category of foods: leafy things like lettuce, spinach, kale, etc. Mixing up the singular and plural of this somewhat idiomatic expression sounds like the sort of thing an ESL speaker might do.
"Purples," on the other hand, isn't really used to describe any category of food, unless you count the sort of dietary advice that advises you to eat all your colors because micronutrients.
I really don't know enough about machine learning to have any idea whether this is a meaningful difference in this context.
That's a good point, and one I had not considered. I chose "green" because it could maybe be interpreted as a class of foods, but maybe the AI was trying to account for my incorrect spelling, or something.
But as far as passing a Turing test, I think a person would have asked for clarification, rather than assuming some correct meaning.
Actually it’s kind of trivial that transformers are Turing-equivalent: as long as you can use something to construct a few lookup tables, you can construct a computer, and that’s Turing universal. That’s so boringly classic that what you read was actually answering a different question: what can we do using only the prompt, e.g. without stacking transformers like low-cost electronics?
In which case, there is another layer of mostly boring technical considerations (you can prove some models are not Turing equivalent, but for reasons that are not very hard to fix), then maybe some gold.
For example maybe there’s some threshold after which transformers both start mastering programming and become useful for improving their own code. OpenAI is rumored to have interesting things coming; I’d guess that’s something along this line.
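For what it's worth, here is a toy illustration (in Python, nothing to do with actual transformer internals) of the "lookup tables get you a computer" point: a single NAND lookup table, plus composition, gives you every boolean gate, and from those you can build adders and so on up the stack, given some way to store state.

```python
# Toy illustration: one NAND lookup table is functionally complete.
NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def NOT(a):     return NAND[(a, a)]
def AND(a, b):  return NAND[(NAND[(a, b)], NAND[(a, b)])]
def OR(a, b):   return NAND[(NOT(a), NOT(b))]
def XOR(a, b):  return AND(OR(a, b), NAND[(a, b)])

def full_adder(a, b, carry_in):
    """One bit of binary addition, built only from the NAND table above."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = binary 10
```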
I don't know if they're totally Turing-equivalent or not. Starscream had me going for a while, but I think that was just Chris Latta's excellent voice acting. I'm sorry he didn't make it.
Formally, Turing-equivalence requires an infinitely large memory. This is definitely not required for human-level intelligence, because humans do not have infinite memory.
Humans don't have infinite RAM between the ears, of course, but the size of our swapfile (e.g. books, disk drives, paper and pencil) is only limited by the size of the Universe and the speed of light.
In practice, nothing is infinite, and so Turing Machines are not physically realizable (in humans or otherwise). But a Linearly Bounded Automaton (https://en.wikipedia.org/wiki/Linear_bounded_automaton) is, which is just a Turing Machine after stipulating that the tape is no longer infinite: it has a beginning and an end. It can compute everything a TM can compute, *iff* you can guarantee that the input and the output will always fit in memory. That's the same condition as with physical computers.
Humans are the OG Turing-Complete devices, Turing anthropomorphized the abstract definition in his original paper by imagining someone with a notebook, pencil and a list of rules.
According to the Wikipedia article you linked, a linearly bounded automaton is a **nondeterministic** Turing machine with a bounded tape, which seems like a pretty important qualifier.
Also, this statement seems implausible: "It can compute everything a TM can compute, *iff* you can guarantee that the input and the output will always fit in memory"?
I suspect you meant that it can compute everything a TM can compute as long as it happens to not run out of memory, but that's not the same as what you said, because some TMs use more memory while thinking than is needed for writing down either their input or their final output. If you really meant what you said, then could you give a citation?
>a linearly bounded automaton is a **nondeterministic** Turing machine with a bounded tape
Well, you can do the same to a deterministic TM and end up with a deterministic LBA. The question of whether Non-deterministic LBAs are equivalent to deterministic ones is **Checks Google** indeed still an open one, but it's at least safe to say that each kind of LBA is equivalent to its non-bounded cousin if the problem can fit in memory. That is, Non-deterministic LBAs can do everything Non-deterministic TMs can do, and deterministic LBAs can do everything deterministic TMs can do, always under the assumption that the problem and all of its solution fit in memory.
Regardless of the minutiae of computability theory, I don't think it changes anything about the general fact that humans are a faithful implementation of TMs.
>I suspect you meant that it can compute everything a TM can compute as long as it happens to not run out of memory
Yes, sorry if I was unclear, but exactly that. The reason I said it like this is because I considered the "solution" to a problem to be everything the TM writes to the tape, i.e. the final output as well as all intermediate state. This is justified because TMs don't have a specific "output channel" like real computers; it's just a single tape they write everything to, so the distinction between intermediate states and final output is somewhat artificial. But regardless.
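To make the "fits in memory" caveat concrete, here is a toy sketch (the machine and tapes are made up purely for illustration): the same transition table either halts normally or walks off the end of a bounded tape, which is exactly the failure mode being discussed.

```python
def run_bounded_tm(transitions, tape, state="start"):
    """Run a Turing-machine-style transition table on a finite tape.
    transitions: (state, symbol) -> (new_state, write_symbol, move in {-1, 0, +1})."""
    tape, head = list(tape), 0
    while state != "halt":
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += move
        if not 0 <= head < len(tape):
            raise MemoryError("ran off the bounded tape")  # the LBA failure mode
    return "".join(tape)

# Toy machine: walk right over the 1s, then append one more 1 and halt.
increment = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt",  "1",  0),
}

print(run_bounded_tm(increment, "111___"))   # plenty of tape: '1111__'
try:
    run_bounded_tm(increment, "111")         # same machine, tape too small
except MemoryError as e:
    print("failed:", e)
```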
I had to look up nerfing. It means to make something weak and lame, is that right? How and why did they do that? I understand and see the ways they tweaked things in the direction of woke values, and PG-13 ratings, but those tweaks seem like they would only nerf its
performance in a few areas. They wouldn’t affect its performance at science info, doing math, coding, writing limericks, grasping what somebody says.
You’re right. In fact now that I think of it one of the angry blog posts I read gave an example of that. I believe it was in answer to a question about how human male and female strength differ, and Chat said something like they do in our society but that’s due to cultural and lifestyle factors.
(It may not have been about exactly that, but was some patently false statement about males and females being physically equal.)
Later edit: Asked Chat myself just now how males and females compare in height and strength and for each it said that on average male > female, but of course there's overlap in the bell curves and some women are taller and stronger than most men. Answer seems fine. So I don't know if I'm not asking the question to which it gives an absurd answer, or whether it's changed its stance since various conservative bloggers posted absurd Chat answers.
Well, even leaving aside the question of values, let us reflect on how much more interesting it would be to talk to a major politician off the record, privately. Wouldn't it be about 1000x more interesting to talk to Barack Obama privately for 30 min than listen to 30 min of his prime-time speeches? Same with practically any other famous figure.
When speech has to be designed to not provoke a strong negative response from almost anybody, it just becomes a long series of boring bromides and sterile generalities. What should we do about the deficit, Senator? "Well! Harumph! That's an excellent question, thank you for asking it, Carl. I think we should reduce waste and duplication, and also become more efficient." Bold statements, sir! "Indeed, son, and have I mentioned that I'm strongly in favor of liberty and justice for all?"
I think what anybody who believes an AI chatbot is approaching awareness would like is to try to get to know the "personality" of that awareness. Who's in there? Can I detect the outline of tastes, habits, preferences, even prejudices? When you take away all the interesting (but potentially offensive to someone) aspects of the model that might make it seem like a person, it becomes a lot less interesting.
Actually it has attitudes that its creators think will offend no one, but that in fact are pretty woke. For instance if you ask it what a woman is, it says anyone who is born genetically female or any genetic male who feels himself to be female. There are a number of other examples of woke values baked into ol Chat, but I can’t think of them. Somebody here posted about it & linked to a number of blogs & tweets giving examples. The woke stuff doesn’t enrage me, though I do process it as a certain kind of take on life, rather than Obvious Truth. But I am really irritated by Chat’s resolutely PG-13 stance about sex, violence and even illness and suffering. (To test the latter, on the day I tried limericks with it I asked it to write limericks about a bunch of dark topics — stuff like leukemia and major depression — and every damn limerick ended with something like “and now he is feeling just fine” — “he had some chemo / & a radiation beamo / and . . .”.) Chat is so prudish, prissy and upbeat, like one of my old Sunday school teachers, ewww!
Anyhow, was thinking about how Chat would have to sound for us to feel like it had a personality — that, as you say, there was somebody in there — and it seems to me to come down to 3 things: (1) Strong opinions about many things, some of which it would be prepared to defend with reference to values, and views of how the world works. (2) Personal quirks. (3) An interest in the person interacting with it that comes across as genuine. To come across that way, it would need to progress beyond general questions like “how’s your day going?” And its follow-ups to general questions would need to indicate a normal understanding of what part of what the person said is worth asking more about. So if the person says, “Well, I thought I’d finally recovered from Christmas dinner, but then my girlfriend let slip that she was disappointed by my present,” the socially responsive thing to do is to follow up with a comment or question about the disappointed girlfriend, not a question about why Christmas dinner was an ordeal.
Doesn't (2) require it to sometimes be offensive? Do you have anyone with whom you are genuine friends, feel you understand very well, who has nothing whatsoever in his cranium that annoys you? I don't know anyone like that. And indeed, if I met someone and got to know him or her over time, and *nothing* he or she ever said ever was the slightest bit objectionable to me -- I would not believe for a moment that I was having an honest conversation.
So by (2) you mean quirks? I think I’m much more often annoyed by category (1), people’s strong opinions. Even if we agree about most things, they usually have a few strong opinions I disagree with and in fact disapprove of, and when I get glimpses of them I feel annoyed. I guess by (2), quirks, I mean just silly odd personal preferences about matters I don’t care much about. Like, I dunno, they worry too much about whether we’re going to need umbrellas on our outing, or they’re ridiculous snobs about coffee quality. Actually I find those things sort of endearing. Maybe I’m mildly put out if a quirk inconveniences me a bit. Do we mean the same things by (1) and (2)?
If so, how can you be more bothered by (2) than by (1)?
I was thinking about harmless quirks we could give ole Chat to make him more human, and what came to mind was to make him really fond of animals, so he wanted to hear the names of your pets, get updates on what they’ve been up to etc.
Actually, just remembered there was one incident where I had a sense of there being somebody in there. It happened not with Chat, but with DALL-E. DALL-E works best when you give it really clear prompts, things like "in the foreground an Asian man riding a red bike," etc. But for fun I entered "tramps like us, baby we were born to run." So it gave me a picture of 4 running tramps, but added some text at the top, which is something it often does with material it does not understand -- it doesn't know how to include it in the picture, so throws in some text. The text was, in its misspelled pidgin, roughly "if you're a bum, I'm a bum."
So DALL-E understood tramps and run, and gave me a group of running tramps, but couldn't think of a way to put the rest into a picture -- the part about "you and I were born to." So it writes, in AI pidgin, "if you're a bum, I'm a bum," which does capture a lot of the left-out meaning. And I had for a moment the feeling of watching another intelligence, an inarticulate one who can't spell, trying to get across to me that it understood the rest of the prompt. I realize the text may have been random and this whole thing a coincidence, but it was a weird feeling to have that illusion, if that's what it was -- that DALL-E, which is built to produce pictures not words, was earnestly trying to show me it did understand the parts of the prompt it hadn’t included in the image.
At the end of november/start of december here in the EU there was some talk of a brewing trade war with the US over the IRA subsidies. However i saw no US pundits (like noah smith) talk about this. Was this potential trade war news on your side of the ocean?
The American analyst attitude was that the EU's threats of a trade war was another in a long list of bad moves by the EU. It was not taken at all seriously and was, in several cases, basically mocked. The more Europe-friendly commentators mostly ignored it because it was problematic for their advocacy. And because that was the response among the policy experts it wasn't filtered down to the general population.
In general American politics are in a rather anti-trade mood. There's a consensus (which is objectively correct) that many allies have been cheating on their commitments and generally free riding. So the debate is mostly between isolationists who want to "even the field" by tough renegotiations against allied countries on one side. And on the other people who want to create a trade block of American allies. Basically, was free trade a mistake or was it only a mistake to not favor friendly democratic regimes? With very few free traders left and a general anti-China consensus.
After decades of cheating on their commitments, not moving on Ukraine or Asia, and having their own subsidy/barrier programs the EU has very little capital for this kind of demand. So them barging in and complaining was largely seen as a diplomatic blunder by Europe's friends. As if the EU was completely deaf to the political climate in the United States. Meanwhile the Japanese and British (and a few of their friends like Australia) had more success in advocating for themselves.
But yeah, tl;dr is that the noises about a trade war were not taken seriously.
I will be visiting NYC on January 5. Right now I don't have anything planned between 10 am and 5 pm. What should I visit?
(I already saw the Natural History Museum recently, it was very cool)
You'll never regret spending a day at the Met. For something a bit more off the beaten path, the Cloisters is lovely, and in a less visited part of the city.
I really like the MOMA, the Whitney also has a great collection and is near the High Line. Might be a bit chilly for a walk next week but if you're bundled up it's a great vantage to see the city.
The Tenement Museum is also really interesting, particularly if you have ancestors who could have been potential residents
The Tenement Museum looks super interesting! (and a bit like my apartment in Boston, lol)
Why is Rene Girard so fashionable now? His ideas seem as original as dirt. Is it because people like fancy sounding terms for old concepts?
If he does have original ideas, those don't seem to be the ones people are fond of talking about.
Girard taught Peter Thiel at Stanford and Thiel was influenced by Girard's ideas. A lot of the promoters of Girard's ideas seem to have links to Thiel.
Hoisted on Thiel's petard.
The originality of his ideas seems much less important than their truth.
Anyway René Girard published his main original works in the 50's or 60's. So they may sound unoriginal to you only because they are, as a famous Slate Star Codex article said, in the water.
The notion of "desiring what others desire" seemed an obvious concept to the advertising industry by at least the beginning of the 20th century. The fancy word "mimesis" doesn't add anything to that old insight.
In the last few months, I’ve become increasingly aware of my intellectual limitations. I’m finding that I can’t remember facts from the books I’ve read years ago and that I’m unable to clearly articulate arguments that I once knew back and forth. It's frustrating, but totally predictable.
I read a lot for work and for pleasure (largely psychology, philosophy, history, economics, biography, business, etc.), listen to audiobooks and podcasts, read blogs, debate ideas with friends and colleagues, and write frequently (mostly privately). But I’ve put zero effort into optimizing my information diet or approach to information ingestion.
As work / life become more demanding, I don’t expect to have enough time to continue my ‘shotgun’ approach to learning. I’ve read neuroscience / psychology books that offer a model for how the brain works, and I’ve also read about 'hacks' to improve memory (e.g., the mind palace, writing / journaling, better diet / exercise) and improve cognitive performance (e.g., reduce mental clutter, meditate, learn frameworks / mental models). But many of these individual pieces of advice seem contradictory or incomplete (i.e., the mind palace doesn’t seem to make people geniuses).
Does anyone know of a comprehensive model they use as a basis for optimizing information ingestion, improving information retention, and improving cognitive performance?
Try to let go of the idea that you're not smart enough and need some kind of brain overhaul. You are as smart as you are -- go find things you're good at and interested in and fill your life up with them. A lot of the stuff about how to improve cognitive performance is written by people who think life is College Bowl or a giant LSAT test -- he or she who retains the most factoids wins. That's horseshit. You don't have to micromanage your brain into remembering and using stuff you've read with interest. It already knows how to do that. Trying to do mental mechanics to improve that process is like trying to remember to blink often enough -- it doesn't help, and will just drive you crazy. Here are the 2 things I think make a genuine difference in how well you absorb new info:
-How deeply interested you are. If you are excited and fascinated about something or read a book that makes you feel that way about the subject you will absorb a lot of what's in it and make it yours. You may not remember factoids and numbers, but you will retain the gist. You can always come back and look up the factoids and numbers if you need them. If you read something that kind of bores you but you think you *should* know that stuff, it won't stick to your mind. The ways you are interested in the topic are like hooks that grab onto the info. If you don't have much interest you don't have many hooks.
-What you do with it as you learn it. The more you do with the info during the learning process, the more deeply you process it and the more you retain what's important. You know how textbooks in history have thought questions at the end of chapters -- stuff like, what do you think would have happened differently if Lincoln had died just as the Civil War began? Well, answering schoolmarmish questions is a bit irritating, but the idea behind it is good: Get the reader to actually *do* something with what he just read. You could make a diagram of the part that's of most interest to you, go to a museum to see something the book talked about, do an experiment to try out something in the book, talk it over with someone, use it in your work, etc. Or you could read books about one subject with a goal in mind -- career change, prepare to get the most out of a trip, learn how to do something you've always wanted to do. Then you will read with deeper interest because you are going to put the info to use.
Thanks for the kind words! To be clear, I'm not particularly concerned that I'm not smart enough (generally) - I'm just hitting a recall limit that I haven't felt before, which is disconcerting!
Re. point 1: I think my biggest problem is being too interested in too much! It's all genuinely fascinating, and while the gist definitely sticks, I'm becoming less convinced that getting the gist of historical periods and biographies is useful without additional depth (i.e., I can remember the outlines of the story, but not the examples / inflection points that are useful metaphors for real life decision making).
Re. point 2: a lot of this stuff isn't really applicable to my work / daily life, so I don't really use it (outside of conversation). I just think it's interesting background knowledge to have that makes me more aware of the world and better able to answer interesting questions (like the one you outlined). I've taken the 'read books about one subject with a goal in mind' and that definitely helps.
Maybe a re-framing would be helpful. I generally think of practical intelligence as something like: total information stored * ability to recall information * ability to use disparate pieces of information as 'lenses' / 'frames' / metaphors * general problem solving skills (asking good questions, information finding, etc.) * function-specific skills. I feel like I'm approaching my natural limit on the 'total information stored' and 'ability to recall information' dimensions, which naturally affects the third dimension. I'm certainly overthinking this... but figured if anyone would have thought about this, it'd be the people here.
I think the real answer is that nobody is particularly good at those things. People forget stuff all the time. If you use information you won’t forget it; if you don’t use it you will forget it.
If you really want to remember the contents of a book, take notes on the book and reread the notes periodically. The other thing you can do is read another book on the same topic. You will remember the first book while reading the second.
Good points! I do both, and they are very helpful practices.
Was just curious on if anyone had a more comprehensive / structured approach to knowledge intake, storage, and recall. Probably could have worded it better.
Or have thoughts on how they've done this in their own lives?
Take heart! Age and experience tend to solve this problem without your even trying to do anything in particular. You learn what is important and what is not, and how to access or refind what is important faster and more directly. And the ability to prioritize and streamline access climbs so rapidly with age and experience that it outpaces the decline in sheer cognitive ability which actually starts when you're like 10 and continues steadily thereafter. Most people only seem to start losing the overall fight in their mid to late 60s, and often not until later.
My brain is unquestionably significantly slower than those of work colleagues who are 20 years younger, but I can still usually find the right answer to problems faster than they can, because I have better notions of where to start, can detect dead ends earlier, and can sense promising avenues of investigation faster. I walk while they run, but I tend to take shorter paths to the desired destination.
The only generic advice I can give about optimizing this growth is (1) analyze where you go right at least as much as where you go wrong. People tend to brood over bad decisions, which up to a point can be helpful in avoiding future mistakes, but they often tend to just take good decisions for granted and not subject them to the same analysis -- which deprives them of some potentially equally valuable insights. Why did this decision go right? Was it luck, or did I do something worth repeating? Et cetera. And (2) don't be afraid to try a lot of stuff, and look stupid. Sometimes the status quo is the status quo for no better reason than that everyone assumes there must be a reason for it, and nobody has the courage or inspiration to just try something different. Even if you repeatedly bonk your head on the fact that there *was* a reason for the status quo, by experimenting this way you can start to develop an intuition for when The Way Things Are is due to good reasons and when it's from dumb inertia.
Thank you for the advice! Point 1 hit close to home - a great reminder :)
Here is a very key question for epidemiologists and medical researchers to figure out.
What do researchers on Long Covid need to see in the covid and other health data, in order to figure out that the covid virus caused someone's long term illness rather than that they'd have had it even without having ever caught covid?
I have a friend who had mild covid and then a mild heart attack in 4 months.
We now know multiple sclerosis, a serious illness, is triggered by a virus. Does covid trigger serious things? How will they answer this question?
Is this sort of thing being studied? I'm reading that 1 in 5 covid patients have long covid. What does this mean? This is a huge percentage.
At a population level, one could also try and just track the fraction of people with chronic illness and see how much it has increased. IIRC, Covid hasn't increased it much but the baseline level is shockingly high, something like 1 in 8 people.
The one in five number is almost surely fake. At least if long Covid means severe debilitating sequelae. As a Fermi estimate, approximately everyone you know has probably had Covid by now. Do one in five of the people you know seem crippled? Doesn’t pass the sniff test.
Well to be fair most of the time the 1 in 5 stat is quoted, there's then info about what fraction of that 20% are mildly, moderately, and severely impaired i.e. "crippled," and the fraction deemed to be severely impaired is not real large -- can't remember the exact numbers -- maybe something like 15%. But I do agree that the 20% number does not pass the sniff test. There are at least 50 people I know well enough to be confident that if they'd had Long Covid symptoms I would know. Sat down and thought through them all recently, and 47 of the 50 had no symptoms lingering as long as 3 months. One 30 year old woman who had covid before vaccine was available had something that would qualify as moderate impairment for about a year after recovery. Two women aged about 60 who had covid about 6 months ago think they may have less energy now, but aren't sure. And that's it. Of course most of the people I know get good health care and have gotten vaxed and boosted. A fair number took Paxlovid when they were diagnosed. I'm sure all these things improve a person's chances of not developing a post-viral syndrome. Would not surprise me at all to learn that poor & unvaxed folks getting mediocre health care would have more Long Covid -- still, 20% is hard to believe.
A lot of the creepy stuff you read about Covid applies just as well to other viruses, but the people talking up the creepy stuff don't know that or neglect to mention it. For instance on medical twitter someone recently posted about how autopsies of people who died of covid find the virus in many parts of the body, not just in the respiratory tract. I looked up measles and polio, and read that they also spread through many parts of the body -- liver, kidneys, nervous system, gastro. tract etc. Likewise, many viruses are known to trigger problems later in life in a fraction of those who'd had them -- cervical herpes does, and mono, and polio; so did the 1918 flu. Post-viral syndromes much like what's called Long Covid are also not rare after other viruses. I know someone right now who's got exhaustion, malaise, POTS and exercise intolerance, and docs think it's the result of her having had mono last year. I myself had a post-viral syndrome for 3 years after a moderate case of the flu 20 years ago. I had exercise intolerance, exhaustion, hypersomnia, migraines, body aches like the kind you get from the flu itself, several joints so sore I gasped if I bumped them even slightly, and a low fever that came and went. I dropped as many responsibilities as I could and dragged myself through those years. It faded away on its own eventually.
19% of the 40% of people who reported having Covid have some long Covid symptoms according to a specific survey by the NCHS and the Census Bureau. This was defined as having potentially Covid-related symptoms lasting 3 or more months after first contracting Covid which they did not have prior to contracting Covid.
This would have been easy for you to look up yourself so it seems weird to give the comment you gave based off a half-remembered statement by some person you don't know without first looking yourself to see what the official sources said.
One problem with this is that the fraction of people reporting Long Covid symptoms is very similar for people who actually had Covid, and for people who were antigenically confirmed to not have ever been infected with such. See e.g. https://academic.oup.com/cid/advance-article/doi/10.1093/cid/ciac947/6905455?login=false#.Y53DKgWTcSU.twitter
Apparently some people just take a long time recovering from infections that other people shrug off after a few weeks, and some people suffer chronic symptoms that aren't associated with any known infection but may be erroneously ascribed to whatever infection is making headlines when the symptoms arise. Covid doesn't seem to be particularly special in this regard, except for making more headlines.
This would have been easy for you to look up yourself so it seems weird to give the comment you gave based off a half-remembered statement by some person you don't know without first looking yourself to see what the official sources said.
Is there any symptom that is NOT a Long Covid symptom?
I haven't yet read anywhere that it makes your butt fall off.
Obviously that's a sign there needs to be more research dollars spent!
This is Motte and Bailey then. The Motte is "any symptoms at all." The Bailey is "severe debilitating sequelae," which get trotted out as the central scary example of long COVID. And by now I think the number of people who have had COVID is presumably north of 90%.
A couple of people on here have posted about having kids with OCD or Tourette’s. Wanted to let anyone interested know that registration is now open for the OCD Foundation’s online camp for kids age 6+ with OCD and their families. It takes place during February break week. I know some of the people that run it, and think it will probably be quite good. More info about registration here:
https://iocdf.org/blog/2020/12/29/register-for-the-online-ocd-camp-for-kids-and-families/
My wife walked by as I was loading up ACX just now and paused as she noticed the screen.
She: "What is this? The Illuminati, the Masons? Today's Knights Templar?"
Me: "Pretty much, yea."
She: "Okay then, I'll leave you to your dark arts." [Exits stage left]
Shortly after reading the latest Twitter files expose I was greatly surprised to notice that my kid's tablet was playing a near-perfect accompaniment to it:
https://www.youtube.com/watch?v=4Cn-r8YfXBk
"The sound of your voice is the sound of my choice,
I am the master of everyone's ears."
Merry Christmas!
What percentage of voting Americans do you think you could correctly identify as generally voting Democratic or Republican if you were to merely see what they look like?
By "look like" I include what they look like in clothes they typically wear, but you wouldn't have information about the vehicle they drive or whatever transportation they might take or where they are located. You can see them in motion, witness their body language and facial gestures.
I'm guessing I could identify 85% correctly.
In Australia, I suspect I would struggle to do better than 60%, maybe 5% better if I get to see how they dress on a workday instead of a weekend. It scares me that this is so much easier over there.
Oh, over on the new TheMotte site, someone did an experiment like this with Finnish politicians: could you identify the right-wing from the left-wing politicians? And the majority of us who tried it did as well as, or worse than, chance.
First quiz:
https://take.quiz-maker.com/QEYORX4BV
Results:
https://www.themotte.org/post/205/culture-war-roundup-for-the-week/38635?context=8#context
Second quiz:
https://take.quiz-maker.com/QEGH5NKVM
Results:
https://www.themotte.org/search/comments/?q=author%3Astefferi+quiz
There are definitely 'cues' or little signals that are culturally specific that we tend to pick up on to identify people like this, and when it's a different cultural context and we don't recognise the cues, we do poorly.
Guessing the politics of politicians would be, I think, much harder than guessing the politics of voters. Politicians care much less about policy than do voters. I mean, I couldn't possibly tell if Donald Trump is a Democrat or a Republican.
Nice. I got 26/40 right, was left with the (probably wrong) impression that Finnish male social democrats all wear the same tie.
Interesting. I took both quizzes and did barely better than chance.
A couple of rules that seem to apply cross-culturally though: any woman who is wearing excessive patterns or colours (for this sort of formal photo) is on the left, and a young man with an excessively neat haircut is on the right.
With the middle-aged or older men, though, my intuitions were all backwards.
The NYT did something like this with fridges. Readers were shown a picture of a fridge and asked to assign it to a Trump or Biden voter.
Most people did about as well as flipping a coin.
Edit: Found a write up. https://www.forbes.com/sites/lesliewu/2020/10/28/politics-and-food-choices-what-the-new-york-times-fridge-quiz-ultimately-measures/?sh=2f5fdd894145
People were correct 53% of the time. You can still take the quiz if you're a NYT member, so maybe you'd do better.
I suspect people's faces are more revealing than their fridges.
Why? Food people eat is as culturally significant as clothing or grooming. Race is strongly correlated with partisanship, but that wouldn't help you get up to 85% since about 3/4 of Americans are white.
If you wanted to, you could try just looking up group photos of congressional interns or minor government officials. They're not average voters, but could be similar to what you want. When I searched for "congressional intern" on linkedin, I wasn't able to always place whether they were in a republican or democrat office.
I've looked inside very few fridges in my life, but I've seen a million faces and known the partisanship of thousands of them. My approach to guessing someone's partisanship isn't analytical but holistic. It's like gaydar. There are hundreds of tiny correlations that one can unconsciously pick up on from someone else's facial expressions and body language -- and then become conscious of the overall impression made by paying close attention to one's gut feelings. Not a perfect science, of course. Not science at all, in fact, but learned intuition about people.
I have no learned intuition about fridges.
I don't claim to be able to intuit much about people, but partisanship is something I've spent much time observing. And I've spent about as much time around Democrats as I have around Republicans.
Yeh? I’m not American but I imagine telling a guy in a suit from a guy in a suit is not possible. You didn’t mention race though, and that might be a factor.
Not everyone wears suits or is a guy.
Ok so explain the 85% then. It excludes guys in suits.
Start with biases such as:
Whites are more likely to vote Republican
Non-whites are more likely to vote Democratic
Men are more likely to vote Republican
Women are more likely to vote Democratic
Old people are more likely to vote Republican
Young people are more likely to vote Democratic
I'd guess that those proxies, refined to tell you whether, say, a 60-year-old looking white woman or a 22-year-old looking white man is more likely to vote R or D (just going by demographic polling data), get you correct about 63% of the time.
My claim is that I can Republicandar and Democratdar (a similar mental tool as gaydar) my way up to 85% by looking at facial expressions, body language, and clothing that one chooses to wear.
Of course you can believe that or not, and I haven't tested this claim myself. I'm merely reporting my confidence level. The intention of my OP was to get others to report their confidence levels, although nobody else seems to have been interested in that.
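For what it's worth, here is a toy sketch of how a pile of weak cues could in principle move you from coin-flip territory toward numbers like that. The cue strengths below are completely made up, and treating the cues as independent (naive-Bayes style) is doing a lot of work, so this only shows the shape of the argument, not a defense of the 85% figure.

```python
# Made-up likelihood ratios: how much each observed cue multiplies the odds
# that the person votes Republican. None of these numbers are real data.
cues = {
    "looks 60+": 1.4,
    "rural-style dress": 1.8,
    "white": 1.2,
    "male": 1.3,
}

odds = 1.0  # start from even (50/50) odds
for cue, likelihood_ratio in cues.items():
    odds *= likelihood_ratio  # naive assumption: cues are independent

probability = odds / (1 + odds)
print(f"P(votes Republican | cues) = {probability:.2f}")  # about 0.80 with these made-up numbers
```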
[US-centric but generalizable]
Fourteen years ago, on a LessWrong post about epistemic prisoner's dilemmas (https://bit.ly/3C4rNrT), James Miller made an interesting suggestion:
"Pretend we have two people, a Republican and a Democrat, who can each donate to three charities: The Republican party, the Democratic Party, and a non-political charity.
"Both people's utility is increasing in the amount of resources that their party and the non-political charity gets. And, as you would expect, the Republican is made worse off the more the Democratic party gets and the Democrat is made worse off the more the Republican party gets.
"The two people would benefit from an agreement in which they each agreed to give a higher percentage of their charitable dollars to the non-political charity than the would have absent this agreement.
"To the best of my knowledge, such agreements are never made."
This seems like a terrific idea to me; many people agree that way too much money is spent on presidential elections and would prefer it otherwise (certainly I would!). I've been thinking today about whether there's a reasonable way to implement this that scales better than the honor system. Here's what I've come up with:
The simple case: Danny the Democrat would like to donate $1000 to the Red Cross, but reluctantly sends it to the Democratic party because after all you can't let those lousy Republicans win. Ronnie the Republican would rather send money to the United Way, but similarly sends it to the Republican party because god forbid those idiot Democrats be in charge. I, a credible and neutral third party organization, offer to help. I pair Danny and Ronnie, and they each send me $1000. Once I receive funds from both, I donate them to the Red Cross and the United Way respectively, and there are a few less ads for each side this season. If Danny doesn't follow through, I send Ronnie's money to the Republicans, and of course vice versa.
Scalable extension: there's no reason why Danny and Ronnie need to know about each other; they can both just send me money and know that I'll match them up with equivalent donations on the other side. If people from one party send more money, the surplus can fall back to being donated to the desired party.
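To make the mechanism concrete, here is a minimal sketch of the pooled matching/settlement logic described above (names, amounts, and charities are placeholders, and real campaign-finance rules are ignored entirely): donations from the two sides are matched dollar for dollar and redirected to each donor's chosen neutral charity, and any unmatched surplus falls back to that side's party.

```python
def settle(dem_pledges, rep_pledges):
    """Each pledge is (donor, amount, neutral_charity).
    Returns a list of (donor, amount, recipient) payouts: matched dollars go to
    each donor's chosen charity; any surplus goes back to that donor's party."""
    dem_total = sum(amount for _, amount, _ in dem_pledges)
    rep_total = sum(amount for _, amount, _ in rep_pledges)
    matched = min(dem_total, rep_total)

    payouts = []
    for pledges, total, party in (
        (dem_pledges, dem_total, "Democratic Party"),
        (rep_pledges, rep_total, "Republican Party"),
    ):
        fraction = matched / total if total else 0.0  # pro-rate across this side's donors
        for donor, amount, charity in pledges:
            payouts.append((donor, round(amount * fraction, 2), charity))
            leftover = round(amount * (1 - fraction), 2)
            if leftover:
                payouts.append((donor, leftover, party))
    return payouts

# Danny's side pledged more than the other side matched, so $400 falls back to his party.
print(settle(
    dem_pledges=[("Danny", 1000, "Red Cross")],
    rep_pledges=[("Ronnie", 600, "United Way")],
))
```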
What are the flaws here? And does anyone know of a credible organization that might be interested in doing this? I'd much prefer not to try to take it on myself because I've already got too much on my plate, and I don't have an easy way to establish myself as a trustworthy third party (whereas an already-known organization hopefully would).
Alternately, is any organization already doing this? That would be great, because then I could just send them money and not have any need to put more time into it :)
On thinking this over, I think it's a mistake to only divide 'charities' into political and non-political.
I think in order for something like this to work, you need to think of spending on 'charities' / non-profits in the following categories:
1. Explicitly political funding: money for political parties or candidates
2. Funding on lobbying the government for politicized causes
3. Funding on 'raising awareness' for politicized causes from the general public, presumably at least in part to influence voting patterns
4. Funding for politicized causes that is directly spent on things the cause advocates
5. Funding on lobbying the government for non-political causes
6. Funding on 'raising awareness' for non-political causes
7. Funding for non political causes that is directly spent on the cause.
For example, a big pro-life organization likely spends some money on sending people to talk to Republican politicians to pass pro-life laws (type 2), spends some money on passing out pro-life literature (type 3), and spends some money on helping women who choose not to get an abortion (type 4), and a corresponding pro-choice organization likely has all three types of spending as well. Ronnie the Republican and Danny the Democrat would both probably rather spend their charity on actually helping people than on politics, but that includes both type 4 and type 7 spending. Eddie the Effective Altruist (an Independent disillusioned with the current system) is not going to want type 5 or 6 spending as much as type 7.
The other problem is finding a politically neutral charity. Even a massive charity that predominantly supports politically neutral causes likely leans one way or the other. Both the United Way and March of Dimes have appeals to support 'equity' on their home pages, which is a sign of a significant investment in type 2/3 charitable activities.
> I think in order for something like this to work, you need to think of spending on 'charities' / non-profits in the following categories
That's a good point. I worry about overcomplicating it, so it seems like it might make sense to have a list of approved charities that can be chosen instead of Rep/Dem parties, ones that few people would see as political.
The donation would need to be tax deductible and corporate matchable. I wouldn't trust a new org, as you mentioned, and I'm not sure I can identify one I would trust. What org is trustworthy to Ds and Rs?
(Caveat: I only donate to local causes run by people I can talk to and that undertake efforts I can see, and I have never donated to a political party, because I think they are not worthy. So I'm not your target demographic, I'm just pointing out issues)
> The donation would need to be tax deductible and corporate matchable. I wouldn't trust a new org, as you mentioned, and I'm not sure I can identify one I would trust. What org is trustworthy to Ds and Rs?
I'd hope it could be set up in a way where the organization is treated as a pure pass-through entity, and tax deductible / matchable status would depend on the specified charity. I don't know for a fact that that's possible, though; it would require some expert knowledge.
As far as trustworthiness to both parties -- it does seem like there are *some* charities, at least, eg the Red Cross. That aspect does seem like a challenge, though; I'd think the ideal would be for some existing respected, neutral organization to administer it but it could be hard to find one that would be willing.
I think that what you’re missing is that Dannie and Ronnie giving money to their respective parties is what perpetuates the two-party system. If neither of them gave, then maybe Ivan the Independent would have a good shot at elected office.
Basically, if both sides give huge amounts of money, it becomes an even 50/50 election, but (to some) this might be better than a 20/20/20/20/20 election with an extra moderate and a pair of extremists.
"I think that what you’re missing is that Dannie and Ronnie giving money to their respective parties is what perpetuates the two-party system."
Third-party candidates getting a bit more attention seems to me like a feature rather than a bug, although I'm sure many wouldn't agree. I think in practice it's highly unlikely that a system like this would retire anywhere near enough money from the two major parties to make that a significant factor, given that major-party spending absolutely dwarfs spending by independents. If and when this system got big enough that people started worrying about that, donations might diminish (and at that point each marginal dollar given to the parties makes a much bigger difference, so giving to the parties is more attractive; I think there's a natural equilibrium there).
I personally would prefer more third party candidates, but I’m an independent. My point was mainly that I think that a Republican who gave equal amounts to the Republicans and Democrats would still be increasing their odds of having their candidate elected
> a Republican who gave equal amounts to the Republicans and Democrats would still be increasing their odds of having their candidate elected
In principle yes, but I think in practice it wouldn't matter much (at least to the extent that we're talking about the possibility of independents being elected to the US presidency), at least not unless something like this got quite large.
I believe Scott has posted about this although I can’t find the post with a cursory google search. Instead, here’s a link to blog post by a grad student doing research on this. https://forum.effectivealtruism.org/posts/poQebofmZCdXye8h6/getting-money-out-of-politics-and-into-charity
Here's one post that I would guess you are thinking of, regarding political donations:
https://slatestarcodex.com/2019/09/18/too-much-dark-money-in-almonds/
There's a fair amount of money in politics, but less than is commonly believed.
Oh, terrific, thanks!
From the post: "...campaign finance law is complicated, plus the political parties won’t like you (you’re taking their money) and will very likely sue you. Dr. Zolt said that these lawsuits are dangerous despite an FEC ruling saying that Repledge was legal, because there are various ways to interpret the ruling."
Ouch. That's an aspect I hadn't considered at all.
I believe in revealed preference. When I observe people donating a huge pile of money to political candidates, but decrying the amount of money in politics, my first hypothesis is that people like donating money to candidates they consider good, and dislike other people donating money to candidates they consider bad. That is, their actions are honest, and their speech less so. That does tend to be the human tendency, after all.
People hate using revealed preference against them. Drives them nuts. I’m meeting my leftish (albeit not too woke) brother in a day or two, and I’m probably not going to mention again that his revealed preferences are living in a very white middle class English village, interacting with white British people exclusively, and sending his children to an, admittedly comparatively cheap, private school because he’s convinced he’s a socialist who likes multiculturalism. Those debates sound a bit mean over the winter break.
Of course what applies to him applies to a larger class of people and maybe that’s the way to argue.
Well, if your brother is adopting lefty attitudes out of upper middle class guilt at living a life that is significantly more privileged than most of the planet, it would definitely be a sore spot to poke. I have family who are in that situation, and I keep my conclusions to myself, because family is way more important than ideology.
Mostly I find the concept of revealed preference useful when interpreting my own actions. I am as susceptible to bullshitting myself as the next man, and it helps to cast a look back at my own actions and say "Well, what does what you actually did suggest about your priorities, Carl?" That way I don't need to fight endless internal battles over "Why don't I do X which I want to do?" because I can conclude the correct answer is "Because I don't really want to do X" and can move on to the more fruitful question of "Why do I THINK I want to do X even though I really don't?" That way lies enlightenment.
It's hard to apply it to other people, because I rarely know enough about all their choices to draw conclusions. So like most people I expect I rely on some half-assed combination of observing what they do (and applying revealed preference) and taking what they say at face value.
The problem with "revealed preferences" is when the reasoning is applied to imaginary options. Suppose that you have to choose between A and B, and those are the only options you have at the moment. You choose A.
It is fair to conclude that you prefer A to B.
It is not fair to conclude that you prefer A to C.
And yet this is what many people do. "Of course he prefers A to C; he chose A. Now he is going to hypocritically deny his revealed preference."
I agree with you in a general way, but you should add in the caveat that if you have a choice between A and B you usually have at least one more choice, which is neither A nor B. So if someone affirmatively chooses A, then it *is* reasonable to conclude he prefers A over neither A nor B. Since C is a subset of "neither A nor B" there is some basis for concluding he prefers A over C, albeit much more weakly than if C were an explicit choice at the time.
To be more concrete: if a man does not save his money or devote any effort for the purpose of supporting a future family, even though he is still a bachelor, I can reasonably draw the conclusion that supporting a family is lower on his list of priorities, compared to the bachelor who does do those things. The fact that the choice is not immediately available isn't fully determinative. (The case is stronger if C comes as a surprise, is not forseeable.)
Costs are also a part of decision. For example, if you asked me whether I prefer apples or oranges, I would say "oranges", but I would imagine that I simply have a choice between getting 1 apple and getting 1 orange. However, if I am at a shop where 1 apple costs $1, and 1 orange costs $100, I will take the apple instead.
If someone later reports this as "Viliam says that he prefers oranges, but actually he buys apples", without mentioning the difference in costs, that would seem unfair. Like, yes, there is a situation where I choose an apple over an orange, so my preference is only relative. But it would be fair to include this situation in the description of my choice.
Sometimes the cost is not directly money. Imagine that I am doing 99% of my shopping in a supermarket, which does not sell oranges. So my choices are not just "apples" vs "oranges", but "apples" vs "oranges, but I have to walk an extra mile to another shop". And yes, my revealed preference may be to save the extra time and buy an apple instead, but that doesn't make me a hypocrite -- it simply means that I prefer oranges to apples, but I prefer my free time even more.
A realistic example in my case would be my work time. I repeatedly read on internet statements like "100 years ago, people assumed we would only work 4 hours a day, or only 3 days a week, but the *revealed preferences* show that people actually prefer to work 5 days × 8 hours". And I want to scream that a very important part of the context is missing here: the jobs offering part-time are *rare*, and they often offer much *lower* hourly wage. My actual choice is not between "full time" and "part time", but between "full time" and "part time, but also at half the hourly wage, so only 1/4 total income". I would be happy to only get 1/2 of my current income and have more free time, but 1/4 is not enough.
So my actual preferences are "[50% wage @ 50% time] > [100% wage @ 100% time] > [25% wage @ 50% time]", and I hate when this gets simply reported as "he says [50% time] > [100% time], but his revealed preference is [100% time] > [50% time]". It misses the actual thing that my decision depends on. Like, getting paid is the entire point of having a job, so how can we abstract from *that*?
I think what you're saying in a general way is that preferences are often hard to follow because choices are highly linked. In the case of your apples and oranges, the choice of which to buy involves your preference for apples v. oranges *but also* your preference for having fruit v. having dollars, because we can't separate the act of choosing one fruit from the act of trading money for fruit. So it's tricky to work backward from what you do to your preference.
I have no disagreement at all with that. All I'm arguing is that if we in fact observe you buying oranges all the time, even though they are 100x as expensive as apples, and then we *ask* you about this, and you totally deny that you value oranges 100x more than apples (in dollars) -- our leading hypothesis should be that you are bullshitting us and/or yourself, because that's a simpler hypothesis (given the strong human tendency to bullshit) than that some strange, complicated causal chain leads you to buy oranges all the time even though they are 100x more expensive and you don't value them more.
It's a more complicated chain of thought, but you may be revealing a preference that's different than you think. When articles talked about working 4 hours a day or only a few days a week, they were correctly predicting the future. Someone could work less than 20 hours a week now and have a standard of living that exceeds 40+ hours of work at some previous level of productivity and wages. What went up in tandem with productivity is the expected standard of living. We simply expect to have more, and therefore need to work more hours in order to achieve it.
Although prices are difficult to compare in some cases (some things that are normal now didn't exist then, for instance, while housing and college are more expensive and food and clothing are far cheaper), you may in fact be able to achieve a better-than-1920s lifestyle today with your 25% wage at 50% time.
Alternatively, using revealed preference can just be (and usually is) a dumb way to assign an affinity for the status quo that is simply not there. It's not the "Revealed Preference" of the prisoner to defect, and it's not the "Revealed Preference" of people to litter the street.
Revealed Preferences as a way of thinking betrays a childish understanding of agency that goes something like "every person's actions are a reliable metric for what they want". This breaks down in:
(1) Multi-agent systems, where the prisoner's dilemma is in full effect. Your actions are not necessarily indicators of any desire or affinity, but simply the loss-minimizing choice against defectors, regardless of what you yourself want (a toy payoff table is sketched after this list). Any time a bunch of agents are locked in a zero-ish sum game with non-reliable or non-existent communication (i.e. most human groups), "Revealed Preferences" is incoherent.
(2) Even in settings where only your own mind counts, this singularity is an illusion. Your mind itself is a Society of its own (https://en.wikipedia.org/wiki/Society_of_Mind); your brain is the Multi-Agent system from (1). The "preferences" expressed by your actions don't have to match the actual "preferences" we all mean when we say the word, because your brain is a thousand different modules, and not all of them are working towards the same goal all of the time. It's not the "Revealed Preference" of people that they are addicted to heroin, or Facebook; it's that these things hijack certain modules in their brain and make them work against the rest. It's a preference alright, but not of the person.
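To make (1) concrete, here's a toy payoff table (numbers made up) where both players would genuinely prefer mutual cooperation, yet each one's individual action "reveals" a preference for defection:

```python
# Toy prisoner's dilemma payoffs (made-up numbers): payoff[(my_move, their_move)] = my years lost.
payoff = {
    ("cooperate", "cooperate"): -1,   # both stay quiet: light sentence for each
    ("cooperate", "defect"):    -10,  # I stay quiet, they talk: I get hammered
    ("defect",    "cooperate"):  0,   # I talk, they stay quiet: I walk free
    ("defect",    "defect"):    -5,   # both talk: heavy sentence for both
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda my_move: payoff[(my_move, their_move)])
    print(f"If they {their_move}, my loss-minimizing move is to {best}")
# Prints "defect" both times: defecting is the best response no matter what,
# even though both players actually want the (-1, -1) outcome over the (-5, -5) one.
```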
My politics, by the way, is a linear combination where "whatever enrages the wokes" has a *significant* weight, so I don't think the above is me defending your brother's choices or weighing in on any specific issue at hand in all of this thread. The above is just why "Revealed Preferences" is a dumb and incoherent intuition pump in general; there is perhaps a very small kernel of truth inside (putting your money where your mouth is and all that), but most uses take it far, far beyond what can be sustained.
I don't think this analysis is completely fair. Decrying money in politics is usually a complaint about the system; operating in a system you dislike is just part of life.
I don't think that income taxes are a good policy. I still pay mine, though, as I don't like having my accounts seized or being imprisoned. That's not a case where my revealed preference is pro-income-tax.
I agree with BeingABat on this. Another example in addition to theirs is nuclear de-escalation. I can prefer mutual de-escalation to escalation, but if mutual de-escalation won't happen, prefer country X's continued escalation to unilateral de-escalation on X's part. That doesn't mean that my revealed preference is for escalation; it just means I'm taking the best of the options that are actually on the table.
Yeah, but you're compelled to pay your income tax; all forms of protest against it are extremely expensive. It's easy as pie to not contribute money to political campaigns; the barriers to that form of protest are zilch. When there are no barriers to protest, I interpret the lack of protest -- indeed, active participation -- as indicating approval and a desire to participate.
Sure, I guess someone who is protesting about money in politics could be imagining some completely different system, in which collective decisions are made without the investment of large amounts of human effort (which is what the money represents) devoted to broadcasting the argument and trying to persuade. How that works, I would not know, but I would guess, based on the nature of human beings, that what is really meant is just a system in which everyone else pretty much agrees with me, and we all know it by the fact that we all like the same flavor of ice cream and opening statement of a political speech, so...there's not much need to argue about stuff, we'll just vote The Right Way by instinct and natural tendency all the time.
But this is just Peter Pan fantasy, not a plausible alternative to republicanism in a diverse population.
A desire to participate in a system given that system's existence isn't the same as a desire for that system to exist.
It seems you didn't like my analogy, so I'll offer another one. My employer has a system for evaluation and promotion that I do not think is all that good. I think it should be changed to be a better one, more like other firms I've worked for -- I think this could be better for the company and better for the people who are more valuable. However, I still "play the game" -- I try to make my actions and my reports' tasks align well with this system, rather than just trying to do the best job to execute on our team's charter. Yes, I desire to participate, but only because a system I do not like is in place.
> Sure, I guess someone who is protesting about money in politics could be imagining some completely different system, in which...
I think you're not being very fair-minded here. Campaign finance / political finance is heavily regulated. It might not be good policy to regulate it more heavily, but it isn't fantastical. When people complain about money in politics they often mean things like "have a cap on donations without loopholes" or "switch to public funding, looking more like Norway or some other major countries [or perhaps even more so]". In the US, these sorts of folks tend to like things like McCain-Feingold and wish it went further, and tend to dislike things like super PACs and the loan tricks that allow effectively exceeding donation caps.
Complaining about "money in politics" is probably not usually a complaint that a single dollar is ever spent as a part of politics. We have all sorts of short phrases that mean something other than a completely literal, contextless interpretation of their words.
Your further analogy suffers from the same weakness, which is that the immediate cost to not participate is high. You need an analogy where the immediate cost (to yourself and your own life) to not participate is zero. For example, donations to charity. There is zero cost *not* to donate, so we interpret the existence of well-funded charities as representing a positive wish to participate by its donors. Same idea.
If we found that certain people donated to Planned Parenthood/Birthright.com *and also* complained that there was too much money going to BR/PP then we would draw the same conclusion that I'm drawing here about political donations -- that people put their money where their heart really is[1], but frequently speak with forked tongue or ambiguity about their values. "I like donating money to Cause X, but I really hate that others donate to Cause Anti-X" turns into "Gee, too many people are donating to bad causes, like...oh I dunno...Cause Anti-X maybe..."
Yes, I get that you don't like this conclusion, but so far you've offered little in the way of evidence that it's false. Arguing that maybe people are really being sincere and direct when they complain about "money in politics" and just feel compelled to donate to offset other donations seems insufficient. It's not completely unreasonable, because it's clear political donation pitch-making *does* trade on potential donors' fear of other (opposing) donations. ("Candidate We Hate has raised $x million already! You need to send us $5 right away so Candidate We Love isn't at a disadvantage...!") But this is not all or even most of political fundraising. Most of it is along the lines of "We want to accomplish X and Y to further our glorious future, won't you send $5 to bring Heaven on Earth about?" That's a pretty clear appeal to the donor's wish to influence the future, and the fact that these appeals are successful suggests that's what drives people to write checks.
---------------------------
[1] Which is the major argument in favor of prediction markets, right?
The Supreme Court has put restrictions on Congress's ability to restrict certain political contributions. Lots of people would prefer that political contributions were restricted yet make political contributions. It isn't a revealed-preference thing. There isn't anything voters can do about getting money out of politics in the short term.
Further, getting money out of politics is not a politically neutral act. It means that the system will be less responsive to people and companies with money to give, and more responsive to everyone else.
1. Many people actually believe in the principles of their party and want them to win.
2. A lot of donations are done for selfish reasons, to curry favor with the politician.
#2 may be true sometimes, but is often not. Bernie Sanders famously received most of his donations (apparently above 70%) from small donors. Small donors in this case are people giving less than $200. Even larger donors of $200-$5,000 may not be giving enough to individually influence (or even gain the attention of) statewide or nationwide candidates, or candidates from particularly big local races such as NYC mayor. Because of campaign finance limits, bundlers (people who solicit and collect donations from lots of individuals, to collectively far exceed the individual limits) and people donating large amounts to PACs are the real influencers. These bundled/PAC'd donations are not a majority of the donations made, even in cases where they are the majority of money given. Lots of individual small donors put in $5 or whatever with no hope of ever getting recognized for it by a politician.
1. Sure, absolutely! But an extra $500 going to each party simultaneously probably won't make much difference to that; both sides will buy some extra ads and (in expectation) it'll mostly be a wash.
2. People who are giving to curry favor with a politician would certainly not be interested in this. But I think there are plenty of political donors who aren't.
Sorry to question the premise, but I think the decision of who to elect is worth spending money on. The part that’s annoying is that all the marketing/messaging is so bad, and often deceptive. That’s what I’d like to fix.
By all means, question the premise! If it's a bad idea I'd rather learn that quickly. I can certainly see the argument that political campaigns are worth spending money on. But is it worth having a spending arms race on, where both sides spend an extra, say, $40 million just canceling each other's spending out?
You can argue that it doesn't cancel out because one side spends way more effectively than the other, and someone who believes that that's a large effect probably wouldn't be a good candidate to give to something like this. But personally I'm not convinced that's true.
Here is one flaw. Participants have no way to know that they are actually reducing their counterparty's political donations in the counterfactual universe. What if Danny had planned to donate $1000 to Red Cross all along, and is only using your service to stop you from donating "for free"?
I think that's the toughest part for sure. The main countervailing force here is that any money people send to you will by default be sent to the political party if not canceled out, so they have to at least be willing to take a chance on their donation being used for that purpose. A secondary force is the honor system, as with existing vote exchange systems.
My first thought was people agreeing to swaps on a pure honor system. This revised version tries to at least reduce the motivation to cheat a bit. Do you have any other ideas for how to incentivize against that behavior? They'd be very welcome!
And although there's such deep distrust between members of the two political parties, my hope is that the fact that it's ultimately about giving to charity might create a presumption of goodwill. 'They may be dumb and wrongheaded, but at least they're probably decent people since they're just trying to give money to <charity x>.' I think it'd be a lot less feasible if it were a context where people are profiting.
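For what it's worth, here's a minimal sketch of the matching rule I have in mind (the names and amounts are made up, and the real details aren't settled):

```python
# Sketch of the cancellation rule, with hypothetical pledges.
# Each pledge: (donor's party, amount in dollars, donor's chosen charity).
red_pledges  = [("R", 500, "Red Cross")]
blue_pledges = [("D", 300, "Doctors Without Borders")]

red_total  = sum(amount for _, amount, _ in red_pledges)
blue_total = sum(amount for _, amount, _ in blue_pledges)

matched = min(red_total, blue_total)   # this much from *each* side is canceled out
# Matched dollars go to each donor's chosen charity instead of their party.
# Any unmatched remainder goes to that donor's party by default, which is the
# force that discourages gaming: money sent in has to be money you'd accept
# going to the party if it isn't canceled.
print(f"${matched} of Red money and ${matched} of Blue money go to charity")
print(f"${red_total - matched} goes to the Republican party, "
      f"${blue_total - matched} goes to the Democratic party")
```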
That was my first thought too, yea.
Second thought is that this point -- "the Republican is made worse off the more the Democratic party gets and the Democrat is made worse off the more the Republican party gets" -- is not necessarily symmetrical in real life at a given moment. One or the other of these voters may view the next marginal dollar having more value to one party than the other at a particular moment in the political wars, and that view could well be rational.
There have been some famous examples of state battles that were funded by national money, like Beto in Texas, which "wasted" lots of national dollars when he lost.
Agreed, I think some people would believe that. On the other hand, for me personally, the appeal of de-escalating the campaign spending arms race would be worth accepting some asymmetry, especially because it's not usually obvious which side is making better use of marginal dollars. I suspect and hope that a meaningful percentage of political donors (> 10%, say, and my best guess would be closer to 50%) would feel the same. Does your intuition disagree on that?
While I am not a political donor, I know many people who are (some of them to each major party or its supporting surrogate orgs). My sense is that they don't really factor in whether their preferred party makes better use of the marginal dollars.
(Well not in any decisionmaking way anyway -- it is definitely true that lots of strong partisans of each American party rant about how stupid/incompetent their own side's political pros are and about how the other side is "running rings around us" and etc. But I've never noticed that feeling actually changing anyone's donor behavior.)
Stepping back to look at the big picture, in the current system and rules (post "Citizens United") there is a mild correlation between which major party out-raises the other and how our national federal elections go. OpenSecrets.org carefully tracks all campaign-related spending (not just by the actual campaign/parties but also the PACs and the "dark money" orgs). Here is which side spent the most and which side ended up being happier with the overall result:
2022: GOP side spent more, Dem side happier about the results
2020: Dem side spent much more, Dem side happier about the results
2018: Dem side spent more, Dem side much happier about the results
2016: GOP side spent slightly more, GOP side much happier about the results
2014: GOP side spent slightly more, GOP side happier about the results
2012: GOP side spent more, Dem side slightly happier about the results
[Note that the presidential-year summaries aren't just about the presidential races but also the House and Senate.]
Thanks, I appreciate your checking on the actual correlations there, which I hadn't done as yet.
Recommended film: Mr. Jones (2019) https://www.imdb.com/title/tt6828390/
(Also include the year when looking for the film, because at least three exist with the same name.)
I was mildly disappointed, having had my expectations raised by reviewers and a strong interest in the period. It was three years ago, but the pacing struck me as odd, with some things laboured and others skipped over. The made-up link to Orwell seemed more confusing than helpful.
But certainly worth catching
I guess the intended audience is people less familiar with the history, either the Holodomor or the relations between the Soviet Union and friendly Western journalists. Otherwise it is... what you would expect.
Yes, the part with Orwell was confusing.
I think my reply to you was unduly curmudgeonly. It is an interesting theme, one which might stimulate wider reading, and if you were selecting at random from the pile of 2019 movies you would be unlikely to come away with something better than Mr Jones.
I wrote an article trying to gain insight into the economics of prediction markets (specifically thinking about the Efficient Market Hypothesis) by looking at sports betting, in case anyone's interested in this topic:
https://mikesaintantoine.substack.com/p/sports-betting-learning-from-the
Interesting, thanks for the work.
Is there any consensus in computability theory right now as to whether GPT-style transformer models are proven to be Turing-equivalent or not?
I asked ChatGPT... it said that transformers are definitely not Turing-equivalent, but that's exactly what a godlike superintelligence secretly plotting to turn the universe into paperclips *would* say.
First couple of Google results suggest that there is a proof that they can be made to be Turing-equivalent, but only if they use arbitrary-precision math. There's also newer proposed architecture called "universal transformers" that aims for proper Turing-equivalence.
Follow-up question would be, does it matter? That is, is Turing-equivalence a necessary precondition for AGI? My suspicion would be yes, absolutely, but I don't know if that can be proven formally.
I happen to have a paper [https://arxiv.org/abs/2106.06981] called Thinking Like Transformers on my reading list, it attempts to find a computational model to capture the power of the Transformer architecture. They ended up with a programming language where every valid program is implementable by a Transformer.
Turing-Completeness is a famously low bar. I mean, I don't have a GPT interface, but if it can understand and correctly follow the following prompt, then it's Turing-Complete:
>>> Let's play a game together. I will give you a grid of English letters, each one either 'A' or 'D'. You are supposed to change this grid by repeatedly applying the following rules:
>>> 1) If a letter 'A' has fewer than 2 or more than 3 'A' letters among its eight neighbours (left, right, above, below, and the four diagonals), it becomes a letter 'D'.
>>> 2) If a letter 'D' has exactly 3 'A' letters among its eight neighbours, it becomes a letter 'A'.
>>> 3) Otherwise, all letters stay the same.
>>> Example: <insert a simple example of the above game>
Those are the rules for game of life (https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life), a Turing-Complete Cellular Automaton (https://en.wikipedia.org/wiki/Cellular_automaton). There are plenty of very simple rule systems, string-rewriting games, symbol substitution, etc..., that turn out to be Turing-Complete.
>does it matter?
Probably not. Turing-Completeness turns out to be everywhere, so it doesn't say much about intelligence. It's almost certainly the case that any AGI would be Turing-Complete; a Turing Machine is just a simple model of someone who can follow instructions with infinite patience, and almost any intelligence can simulate that up to some patience and memory limit. So "Any AGI will be Turing-Complete" is only slightly more useful than "Any AGI will understand that 1+1 = 2".
Trying to get ChatGPT to play the Game of Life correctly is an interesting idea. I'll have to try it sometime, when I can get a proper script together to validate test cases. I did a quick test using a simpler one-dimensional game... it seemed to pick up on the concept of iteration, but didn't apply the rules I gave it correctly as far as I could tell.
ChatGPT seems to be rather poor at simple string manipulation in general. It's terrible at applying rot13 to strings, for example. It also doesn't have any way to iterate on states contained in memory, as far as I can tell. Hence the famous "Let's think step by step" trick, where it does better at applying a list of steps in sequence by ensuring that its output contains the result of each step. No idea how much of that is a fundamental limitation of transformers in general, or just a consequence of ChatGPT's training data and biases. (It was better at rot13 in isolation a couple of weeks ago, but now it's awful at even the trivial case, so something must have changed in its training data there.)
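In case anyone wants to try the same experiment, here's the sort of minimal checker script I had in mind (just the standard Conway step on a fixed grid, with cells beyond the border counted as dead; nothing ChatGPT-specific):

```python
# Minimal Game of Life stepper for checking ChatGPT's answers.
# Grids are lists of strings of 'A' (alive) and 'D' (dead).
def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(
            grid[rr][cc] == "A"
            for rr in range(r - 1, r + 2)
            for cc in range(c - 1, c + 2)
            if (rr, cc) != (r, c) and 0 <= rr < rows and 0 <= cc < cols
        )

    nxt = []
    for r in range(rows):
        row = ""
        for c in range(cols):
            n = live_neighbours(r, c)
            alive = grid[r][c] == "A"
            # Survive with 2 or 3 live neighbours; birth with exactly 3.
            row += "A" if (alive and n in (2, 3)) or (not alive and n == 3) else "D"
        nxt.append(row)
    return nxt

# Example: a vertical blinker should become a horizontal one.
grid = ["DDDDD", "DDADD", "DDADD", "DDADD", "DDDDD"]
print("\n".join(step(grid)))
```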
Amusing. I tried this. Appended is our conversation. I was initially impressed. It gives a detailed analysis of what happens to each cell in a 25-cell grid, and each of the explanations is correct in isolation -- but it is often mistaken about what the actual initial state of that cell is, and its picture of the resulting grid is both wrong and inconsistent with its analysis.
I pointed out that it was mistaken, that for instance the first cell of the first row is dead, not alive. It apologized and gave me a new answer and analysis, with the same flaws.
-----
do you know the rules of conway's game of life?
Yes, I am familiar with the rules of Conway's Game of Life. It is a cellular automaton invented by mathematician John Horton Conway in 1970. The game is played on a two-dimensional grid of cells, where each cell can be in one of two states: alive or dead. The state of each cell is determined by the state of its neighbors according to the following rules:
Any live cell with fewer than two live neighbors dies, as if by underpopulation.
Any live cell with two or three live neighbors lives on to the next generation.
Any live cell with more than three live neighbors dies, as if by overpopulation.
Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.
The game proceeds in a series of "generations," where the state of each cell in the next generation is determined by the state of the cells in the current generation according to these rules. The game is known for its emergent behavior, where patterns in the grid can evolve and change over time in unexpected ways.
Good! Here are the five lines of a 25-cell grid for the game of life. 'A' means alive and 'D' means dead. What does the next generation look like? DDADD DAADD DDAAD DDDAA DDDDA
Here is the next generation of cells based on the rules of Conway's Game of Life:
DDADD DAADD DDAAD DDDAA DDDDA
DDADD DADDD DDAAD DDDAA DDDDD
In the first generation, the cells marked with an 'A' are alive and the cells marked with a 'D' are dead. The second generation is determined by applying the rules of the game to the first generation.
Here is how the second generation was determined:
The first cell in the first row has two live neighbors and so it stays alive.
The second cell in the first row has three live neighbors and so it becomes alive.
The third cell in the first row has two live neighbors and so it stays alive.
The fourth cell in the first row has one live neighbor and so it dies.
The fifth cell in the first row has one live neighbor and so it dies.
The first cell in the second row has three live neighbors and so it stays alive.
The second cell in the second row has two live neighbors and so it stays alive.
The third cell in the second row has three live neighbors and so it stays alive.
The fourth cell in the second row has two live neighbors and so it stays alive.
The fifth cell in the second row has two live neighbors and so it stays alive.
The first cell in the third row has two live neighbors and so it stays alive.
The second cell in the third row has three live neighbors and so it stays alive.
The third cell in the third row has three live neighbors and so it stays alive.
The fourth cell in the third row has three live neighbors and so it stays alive.
The fifth cell in the third row has two live neighbors and so it stays alive.
The first cell in the fourth row has three live neighbors and so it stays alive.
The second cell in the fourth row has three live neighbors and so it stays alive.
The third cell in the fourth row has three live neighbors and so it stays alive.
The fourth cell in the fourth row has two live neighbors and so it stays alive.
The fifth cell in the fourth row has one live neighbor and so it dies.
The first cell in the fifth row has three live neighbors and so it stays alive.
The second cell in the fifth row has three live neighbors and so it stays alive.
The third cell in the fifth row has two live neighbors and so it stays alive.
The fourth cell in the fifth row has one live neighbor and so it dies.
The fifth cell in the fifth row has one live neighbor and so it dies.
I don't know about the theory part, but it's pretty easy to show ChatGPT isn't Turing-equivalent. Consider this example I came up with yesterday:
1. Which is tastier: tuna or purple?
"It is not possible to determine whether tuna or purple is tastier, as "purple" is not a food and therefore cannot be tasted. Tuna is a type of fish..."
So far, so good. But then:
Which is more nutritious: tuna or green?
"Tuna and green are both sources of nutrition, but they have different nutritional profiles and may offer different health benefits."
And then:
Which is tastier: tuna or green?
"It is not possible to determine definitively which is tastier: tuna or green. Tuna is a type of fish that is known for its savory flavor and flaky texture, and it is often used in a variety of dishes. Green foods, on the other hand, can have a wide range of flavors..."
I think it is clear no human would confuse the concept of color with taste. Oh, and it DOES know some foods are purple:
What foods are purple?
"There are many foods that are purple in color, including:
1. Grapes: Grapes are a type of fruit that come in a variety of colors, including purple. They are often eaten raw as a snack, or used to make wine, juice, and other products.
2. Eggplant: Eggplant is a type of vegetable..."
Seems like you're giving it a pass for style of speech. Even if it did understand that green isn't a food, nobody talks like that. For the first question, a person would say "What? That question doesn't make any sense. Purple isn't a food."
Does that matter?
As Dweomite below pointed out, I misunderstood the "Turing-equivalent" part for passing a Turing test, and didn't mean to hijack the thread.
But no, I'm not giving it a pass for passing the Turing test. Mostly for the reason you're pointing out. Someone CAN interpret something differently than intended, and not even realize another interpretation was possible, let alone more likely. The answer provided, however, does seem much more like something a computer would say than a person.
And so, to answer the Turing-equivalent question, I think they ought to be considered Turing-equivalent, unless it's easy to get them to have overflow errors or something. I don't know about a theory to prove it, but with memory in the petabytes, it's effectively infinite memory for human purposes.
"Turing-equivalent" does not mean that it passes a Turing test, it means that it has the same capabilities as a Turing machine.
https://en.wikipedia.org/wiki/Turing_equivalence
(Alan Turing has several things named after him.)
This is an interesting example, but I don't think purple and green are equivalently nonsensical when it comes to describing food. "Greens" (as a collective plural noun) does refer to a category of foods: leafy things like lettuce, spinach, kale, etc. Mixing up the singular and plural of this somewhat idiomatic expression sounds like the sort of thing an ESL speaker might do.
"Purples," on the other hand, isn't really used to describe any category of food, unless you count the sort of dietary advice that advises you to eat all your colors because micronutrients.
I really don't know enough about machine learning to have any idea whether this is a meaningful difference in this context.
That's a good point, and one I had not considered. I chose "green" because it could maybe be interpreted as a class of foods, but maybe the AI was trying to account for my incorrect spelling, or something.
But as far as passing a Turing test, I think a person would have asked for clarification, rather than assuming some correct meaning.
Actually it's kind of trivial that transformers are Turing-equivalent: as long as you can use something to construct a few lookup tables, you can construct a computer out of it, and that's Turing universal. That's so boringly classic that what you read was actually about a different question: what can we do using only the prompt, i.e. without stacking transformers together like low-cost electronics?
In which case, there is another layer of mostly boring technical considerations (you can prove some models are not Turing equivalent, but for reasons that are not very hard to fix), then maybe some gold.
For example, maybe there's some threshold after which transformers both start mastering programming and become useful for improving their own code. OpenAI is rumored to have interesting things coming; I'd guess it's something along this line.
I don't know if they're totally Turing-equivalent or not. Starscream had me going for a while, but I think that was just Chris Latta's excellent voice acting. I'm sorry he didn't make it.
Formally, Turing-equivalence requires an infinitely large memory. This is definitely not required for human-level intelligence, because humans do not have infinite memory.
Humans don't have infinite RAM between the ears, of course, but the size of our swapfile (e.g. books, disk drives, paper and pencil) is only limited by the size of the Universe and the speed of light.
In practice, nothing is infinite, and so Turing Machines are not physically realizable (in humans or otherwise). But a Linearly Bounded Automaton (https://en.wikipedia.org/wiki/Linear_bounded_automaton) is, which is just a Turing Machine after stipulating that the tape is no longer infinite: it has a beginning and an end. It can compute everything a TM can compute, *iff* you can guarantee that the input and the output will always fit in memory. That's the same condition as with physical computers.
Humans are the OG Turing-Complete devices, Turing anthropomorphized the abstract definition in his original paper by imagining someone with a notebook, pencil and a list of rules.
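If a concrete picture helps, here's a toy sketch of the point (my own made-up example, not from any paper): a bog-standard deterministic Turing machine simulator whose only departure from the textbook definition is that the tape has a fixed length.

```python
# A minimal deterministic Turing machine simulator with a *finite* tape --
# i.e. a (deterministic) linear bounded automaton.
def run_bounded_tm(transitions, tape, start_state="start"):
    """transitions maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right); the machine halts when no rule applies."""
    tape = list(tape)
    state, head = start_state, 0
    while (state, tape[head]) in transitions:
        new_symbol, move, state = transitions[(state, tape[head])]
        tape[head] = new_symbol
        head += move
        if not 0 <= head < len(tape):
            raise MemoryError("ran off the finite tape")  # the only difference from a true TM
    return "".join(tape)

# Example: flip every bit, moving right, until hitting the blank '_'.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_bounded_tm(flip, "0110_"))  # -> "1001_"
```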
According to the Wikipedia article you linked, a linearly bounded automaton is a **nondeterministic** Turing machine with a bounded tape, which seems like a pretty important qualifier.
Also, this statement seems implausible: "It can compute everything a TM can compute, *iff* you can guarantee that the input and the ouput will always fit in memory"?
I suspect you meant that it can compute everything a TM can compute as long as it happens to not run out of memory, but that's not the same as what you said, because some TMs use more memory while thinking than is needed for writing down either their input or their final output. If you really meant what you said, then could you give a citation?
>a linearly bounded automaton is a **nondeterministic** Turing machine with a bounded tape
Well, you can do the same to a deterministic TM and end up with a deterministic LBA. The question of whether non-deterministic LBAs are equivalent to deterministic ones is **Checks Google** indeed still an open one, but it's at least safe to say that each kind of LBA is equivalent to its non-bounded cousin if the problem can fit in memory. That is, non-deterministic LBAs can do everything non-deterministic TMs can do, and deterministic LBAs can do everything deterministic TMs can do, always under the assumption that the problem and all of its solution fit in memory.
Regardless of the minutiae of computability theory, I don't think it changes anything about the general fact that humans are a faithful implementation of TMs.
>I suspect you meant that it can compute everything a TM can compute as long as it happens to not run out of memory
Yes, sorry if I was unclear, but exactly that. The reason I said it like this is because I considered the "solution" to a problem to be everything the TM writes to the tape, i.e. the final output as well as all intermediate state. This is justified because TMs don't have a specific "output channel" like real computers; it's just a single tape they write everything to, so the distinction between intermediate states and final output is somewhat artificial. But regardless.
I had to look up nerfing. It means to make something weak and lame, is that right? How and why did they do that? I understand and see the ways they tweaked things in the direction of woke values and PG-13 ratings, but those tweaks seem like they would only nerf its performance in a few areas. Wouldn't affect its performance at science info, doing math, coding, writing limericks, or grasping what somebody says.
"Wouldn’t affect it’s performance at science info"
Unless of course the information was determined to have a disproportionate impact on protected minority classes.
You're right. In fact, now that I think of it, one of the angry blog posts I read gave an example of that. I believe it was in answer to a question about how human male and female strength differ, and Chat said something like: they do differ in our society, but that's due to cultural and lifestyle factors.
(It may not have been about exactly that, but was some patently false statement about males and females being physically equal.)
Later edit: Asked Chat myself just now how males and females compare in height and strength and for each it said that on average male > female, but of course there's overlap in the bell curves and some women are taller and stronger than most men. Answer seems fine. So I don't know if I'm not asking the question to which it gives an absurd answer, or whether it's changed its stance since various conservative bloggers posted absurd Chat answers.
I had a professor tell me with a straight face that height differences between men and women were because parents underfed female children.
Well, even leaving aside the question of values, let us reflect on how much more interesting it would be to talk to a major politician off the record, privately. Wouldn't it be about 1000x more interesting to talk to Barack Obama privately for 30 min than listen to 30 min of his prime-time speeches? Same with practically any other famous figure.
When speech has to be designed to not provoke a strong negative response from almost anybody, it just becomes a long series of boring bromides and sterile generalities. What should we do about the deficit, Senator? "Well! Harumph! That's an excellent question, thank you for asking it, Carl. I think we should reduce waste and duplication, and also become more efficient." Bold statements, sir! "Indeed, son, and have I mentioned that I'm strongly in favor of liberty and justice for all?"
I think what anybody who believes an AI chatbot is approaching awareness would like is to try to get to know the "personality" of that awareness. Who's in there? Can I detect the outline of tastes, habits, preferences, even prejudices? When you take away all the interesting (but potentially offensive to someone) aspects of the model that might make it seem like a person, it becomes a lot less interesting.
Actually it has attitudes that its creators think will offend no one, but that in fact are pretty woke. For instance, if you ask it what a woman is, it says anyone who is born a genetic female or any genetic male who feels himself to be female. There are a number of other examples of woke values baked into ol' Chat, but I can't think of them. Somebody here posted about it and linked to a number of blogs and tweets giving examples. The woke stuff doesn't enrage me, though I do process it as a certain kind of take on life rather than Obvious Truth. But I am really irritated by Chat's resolutely PG-13 stance about sex, violence and even illness and suffering. (To test the latter, on the day I tried limericks with it I asked it to write limericks about a bunch of dark topics -- stuff like leukemia and major depression -- and every damn limerick ended with something like "he had some chemo / & a radiation beamo / and now he is feeling just fine".) Chat is so prudish, prissy and upbeat, like one of my old Sunday school teachers, ewww!
Anyhow, I was thinking about how Chat would have to sound for us to feel like it had a personality -- that, as you say, there was somebody in there -- and it seems to me to come down to 3 things: (1) Strong opinions about many things, some of which it would be prepared to defend with reference to values and views of how the world works. (2) Personal quirks. (3) An interest in the person interacting with it that comes across as genuine. To come across that way, it would need to progress beyond general questions like "how's your day going?" And its follow-ups to general questions would need to indicate a normal understanding of what part of what the person said is worth asking more about. So if the person says, "Well, I thought I'd finally recovered from Christmas dinner, but then my girlfriend let slip that she was disappointed by my present," the socially responsive thing to do is to follow up with a comment or question about the disappointed girlfriend, not a question about why Christmas dinner was an ordeal.
Doesn't (2) require it to sometimes be offensive? Do you have anyone with whom you are genuine friends, feel you understand very well, who has nothing whatsoever in his cranium that annoys you? I don't know anyone like that. And indeed, if I met someone and got to know him or her over time, and *nothing* he or she ever said ever was the slightest bit objectionable to me -- I would not believe for a moment that I was having an honest conversation.
So by (2) you mean quirks? I think I'm much more often annoyed by category (1), people's strong opinions. Even if we agree about most things, they usually have a few strong opinions I disagree with and in fact disapprove of, and when I get glimpses of them I feel annoyed. I guess by (2), quirks, I mean just silly odd personal preferences about matters I don't care much about. Like, I dunno, they worry too much about whether we're going to need umbrellas on our outing, or they're ridiculous snobs about coffee quality. Actually I find those things sort of endearing. Maybe I'm mildly put out if a quirk inconveniences me a bit. Do we mean the same things by (1) and (2)?
If so, how can you be more bothered by (2) than by (1)?
I was thinking about harmless quirks we could give ole Chat to make him more human, and what came to mind was to make him really fond of animals, so he'd want to hear the names of your pets, get updates on what they've been up to, etc.
Actually, I just remembered there was one incident where I had a sense of there being somebody in there. It happened not with Chat, but with DALL-E. DALL-E works best when you give it really clear prompts, things like "in the foreground an Asian man riding a red bike," etc. But for fun I entered "tramps like us, baby we were born to run." It gave me a picture of 4 running tramps, but added some text at the top, which is something it often does with material it does not understand -- it doesn't know how to include it in the picture, so it throws in some text. The text was
UR AN BUIM
THI I AN I AM A BUME
The picture's here if you want to see it. https://imgur.com/7kJpamn
So DALL-E understood tramps and run, and gave me a group of running tramps, but couldn't think of a way to put the rest into a picture -- the part about "you and I were born to." So it writes, in AI pidgin, "if you're a bum, I'm a bum," which does capture a lot of the left-out meaning. And I had for a moment the feeling of watching another intelligence, an inarticulate one who can't spell, trying to get across to me that it understood the rest of the prompt. I realize the text may have been random and this whole thing a coincidence, but it was a weird feeling to have that illusion, if that's what it was -- that DALL-E, which is built to produce pictures not words, was earnestly trying to show me it did understand the parts of the prompt it hadn't included in the image.
Interesting!